Introduction
In this article, I'll compare the encryption standards we consider safe today with the technology on the horizon that could make them unsafe in the coming years. I will also try to analyze how we can prepare for that future at different scales in our industry, software development.
First of all, I want to present the two main types of encryption that most of the software we use relies on. This is based largely on what the USA's National Institute of Standards and Technology (NIST) states as of today, in 2022.
Symmetric Encryption
On one side we have symmetric encryption, which uses a single private key (or secret) to encrypt and decrypt data. The secret needs to be shared among the parties that want to communicate securely, and that sharing is in fact its main weakness and a security risk. Combined with other measures to ensure a secure key transfer, it can be a good option, as it offers more cryptographic strength per key bit than asymmetric methods.
The most used algorithm is the Advanced Encryption Standard (AES), which so far has not been cracked. There are multiple versions of it, AES-128, AES-192, and AES-256 (the number in the name being the length of the key in bits). An issue with this kind of algorithm is that it is vulnerable to brute-force attacks, and it relies on key length to strengthen the encryption against them. Note that with today's classical computers, it would take an immense amount of time (thousands of trillions of years, far longer than the age of the universe) to crack any of the AES key lengths.
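To make this concrete, here is a minimal sketch of symmetric encryption in Python, using the third-party cryptography package and its AES-GCM construction (the key and message are placeholder values for illustration):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Both parties must already share this secret key -- distributing it
# safely is exactly the weakness described above.
key = AESGCM.generate_key(bit_length=256)   # AES-256

aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # never reuse a nonce with the same key

ciphertext = aesgcm.encrypt(nonce, b"transfer $100 to account 42", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"transfer $100 to account 42"
```

The same key object both encrypts and decrypts, which is the defining property of symmetric encryption.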
There have also been other attacks, like Meet-in-the-Middle (MITM), that compromised algorithms we used as standards in the recent past (or still do). One example is the DES (Data Encryption Standard) family, whose first version used a 56-bit key and was crackable by brute force with only 2^56 operations (this took 22 hours in 1999 and can be done in a matter of hours or less today). Its successor, Double DES, tried to fix this by encrypting twice with two different keys, but it is exactly the scheme that ended up being vulnerable to the MITM attack, which cancels out most of the benefit of the second key. It was then replaced by Triple DES, which is also vulnerable to more than one attack (including MITM), and the only remedies are, again, lengthening the key or performing more operations with the keys to add complexity.
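To illustrate why encrypting twice adds less security than it seems, here is a toy meet-in-the-middle attack in Python. The 8-bit "cipher" below is an invented stand-in (the real attack targets Double DES), but the structure is the same: tabulate the middle values from the plaintext side, then meet them from the ciphertext side.

```python
MUL, INV = 7, 183   # 7 * 183 = 1281 = 5*256 + 1, so MUL is invertible mod 256

def enc(k, b):      # toy 8-bit block "cipher", for illustration only
    return ((b ^ k) * MUL) & 0xFF

def dec(k, b):
    return ((b * INV) & 0xFF) ^ k

# Two known plaintext/ciphertext pairs made with secret keys k1, k2.
k1, k2 = 0x3C, 0xA7
(p0, c0), (p1, c1) = [(p, enc(k2, enc(k1, p))) for p in (0x5A, 0x11)]

# Step 1: tabulate the "middle" value enc(ka, p0) for every first key (2^8 work).
middle = {enc(ka, p0): ka for ka in range(256)}

# Step 2: meet in the middle from the ciphertext side (another 2^8 work) and
# confirm survivors against the second pair -- roughly 2^9 operations in
# total, instead of the 2^16 a naive brute force over both keys would need.
for kb in range(256):
    ka = middle.get(dec(kb, c0))
    if ka is not None and enc(kb, enc(ka, p1)) == c1:
        print(f"candidate keys: k1={ka:#04x}, k2={kb:#04x}")
```

The printed candidates include the true pair (0x3c, 0xa7); the second known pair filters out almost all false matches, just as in the real attack on Double DES.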
Symmetric encryption is mostly used to encrypt data in operations that need to be fast, like banking and credit card transactions. It is also used in the Transport Layer Security (TLS) protocol all over the internet, in encrypted data storage, and in many more applications.
Asymmetric Encryption
On the other hand, we have asymmetric encryption. This type of encryption is based on a key pair: a private key and a public key. The public key can be shared openly, as its name states, and is used to encrypt data; only the matching private key, which should never be shared, can decrypt it (for digital signatures, the roles are reversed: the private key signs and the public key verifies). Publishing the public key does expose some mathematical information about the pair, but deriving the private key from it is designed to be infeasible.
The most well-known and current standard for asymmetric encryption is RSA (Rivest-Shamir-Adleman). It uses two large prime numbers to generate the private-public key pair, and it is commonly used as a key-exchange algorithm. Thanks to RSA we can exchange the private keys used for symmetric encryption in a more secure way, and thus create other complex solutions for different threats and use cases. For example, it is used along with AES in TLS, to exchange the symmetric session keys and guarantee digital integrity. We can find it too in email encryption services, digital signatures, blockchain, and more.
As mentioned above, this method's security relies on the difficulty of factoring the product of two large prime numbers. The two primes are kept secret and form the basis of the private key, while their product (the modulus) is part of the public key. To brute-force the factorization of a modern 2048-bit RSA modulus with classical computing, an attacker would need around 300 trillion years. This does not look like a threat, right? Well, the vulnerability is still there, and 300 trillion years is just a way to relate the number of computational operations to the time each operation takes. And this is based only on the model of computation we understand today.
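As a sketch of how this looks in practice, here is RSA in Python with the third-party cryptography package (the 2048-bit key size and OAEP padding are common modern choices, used here for illustration):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Key generation picks two large secret primes; their product, the
# 2048-bit modulus, becomes part of the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Anyone may encrypt with the public key...
ciphertext = public_key.encrypt(b"an AES session key, for example", oaep)

# ...but only the private-key holder can decrypt, because doing so is
# equivalent to knowing the secret primes behind the modulus.
assert private_key.decrypt(ciphertext, oaep) == b"an AES session key, for example"
```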
The Threat
Until now, we could say that our main defense against most attacks on symmetric encryption is to use a longer key and/or add computational work (complexity) to its derivation. That way, cracking our systems by brute force, or by the other attacks we know today, would take an amount of computation time we consider far beyond reach. But why do we consider those times unreachable? That is based on Moore's Law, which observes that computational power roughly doubles every two years; at that pace, it really does look difficult to reach the power needed to break today's cryptography in the near future.
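A back-of-the-envelope calculation shows where those huge numbers come from. The guess rate below is an assumed (and generous) figure, not a measurement:

```python
SECONDS_PER_YEAR = 3.156e7

def years_to_brute_force(key_bits, guesses_per_second):
    # On average an attacker searches half the keyspace before hitting the key.
    return 2 ** (key_bits - 1) / guesses_per_second / SECONDS_PER_YEAR

for bits in (56, 128, 256):
    # 1e12 guesses/second is an assumed classical rate, for illustration.
    print(f"{bits}-bit key: ~{years_to_brute_force(bits, 1e12):.2e} years")
```

At that rate a 56-bit DES key falls in about ten hours, while a 128-bit key already needs on the order of 10^18 years; doubling computing power every two years moves these figures, but not by enough to matter for the larger keys.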
What if I told you that the 300 trillion years needed to break RSA would be seconds for a sufficiently powerful quantum computer? Yes, seconds: Shor's algorithm can factor an RSA modulus in polynomial time once enough stable qubits exist. Symmetric encryption fares somewhat better, but Grover's algorithm speeds up brute-force search quadratically, effectively halving a symmetric key's length, so it does not sound too crazy to expect that the weaker symmetric configurations could be cracked in a reasonable time with this new technology. It would just be a matter of the field improving at the pace it is moving now, and it could happen even faster if there is an important breakthrough. With what we know now on the matter, some estimates expect a quantum computer with the power required to crack RSA by 2030. That is not that far, right?
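The asymmetry between the two quantum algorithms is worth spelling out. This small comparison counts queries rather than wall-clock time, which is all we can honestly estimate today:

```python
# Classical brute force needs ~2^(n-1) guesses on average; Grover's algorithm
# needs on the order of 2^(n/2) quantum queries, halving the effective key
# length. Shor's algorithm, by contrast, breaks RSA outright in polynomial time.
for bits in (128, 192, 256):
    print(f"AES-{bits}: classical ~2^{bits - 1} guesses, "
          f"Grover ~2^{bits // 2} queries "
          f"(~{bits // 2}-bit effective security)")
```

This is why AES-256 is expected to survive the quantum era with an effective 128 bits of security, while RSA, as we use it today, does not survive at all.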
NIST already has a proposed plan to transition to new post-quantum standards, but its standards have been wrong and had vulnerabilities in the past (DES, for example, was on the list of algorithms NIST considered secure not so long ago). Also, we have no certainty at all that the new encryption schemes NIST proposes as standards will be quantum-proof once that technology actually rolls out.
I also think it is extremely important to note how the future of software looks for the coming years, because it is common to hear that data that needs to be secure today probably will not need to be secure anymore "when quantum computing is around", and similar affirmations. But encrypted data can be harvested and stored now and simply decrypted later, once quantum computers arrive. In a world where everything seems to be heading toward software built on harvested data (like Artificial Intelligence) and centralized services that host user data that will still be there when quantum comes, I don't see any reason not to worry about being as secure as possible for that moment. Decentralized services that are emerging on top of today's encryption technology will also be vulnerable to this threat, and we had better have alternatives to prevent the chaos that could arise. Basically, most of our exposure to encryption today is in danger, and the earlier we start to think about how to transition to this new cryptographic paradigm, the better.
The Transition
First, let’s try to look for ways that this threat could be handled.
One approach is the one NIST is already supposed to be tackling: developing new cryptographic schemes that are quantum-proof, along with a plan to migrate from the current ones in time (hopefully).
Another option would be to build quantum-proof solutions on top of the infrastructure we have today. To give a familiar example, a quantum-resistant layer could be built over TLS, such as a hybrid key exchange that combines a classical scheme with a post-quantum one, to protect it from quantum exploits.
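One way such a layer is commonly sketched is hybrid key exchange: derive the session key from both a classical secret and a post-quantum one, so the connection stays safe as long as at least one of the two schemes holds. Below is a minimal Python sketch; the post-quantum share is a random placeholder, since no post-quantum KEM ships with the cryptography package used here:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical Diffie-Hellman exchange (X25519), as used in TLS today.
alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
classical_secret = alice.exchange(bob.public_key())

# Placeholder for a post-quantum KEM shared secret (e.g. from Kyber);
# os.urandom stands in only because this library provides no PQC KEM.
pq_secret = os.urandom(32)

# Hybrid derivation: the session key depends on BOTH secrets, so breaking
# only the classical part (with a quantum computer) is not enough.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"hybrid handshake").derive(classical_secret + pq_secret)
```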
But what is pretty certain is that the transition is not going to be easy or short. It seems we are going to be in an environment where lots of new technologies emerge, and we as developers will have to adapt, always keeping in mind that what we are building on today might not work as we expect in the future.
As software developers, we have the tools to be prepared for this era. The most important one is good practices. We already know how to build scalable software, and we can start by thinking of this transition as one kind of scaling. So if we use good abstraction practices, for example, we are heading in the right direction (see the sketch below). If we make sure we have maintainable code, and actually do the maintenance, deprecate what is needed, update dependencies, and keep track of them (AES-128 and RSA are probably among your dependencies), then we should be in a good position to create software that is ready for this next phase of digital information.
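As a concrete example of the kind of abstraction I mean, here is a sketch of "crypto agility" in Python: application code depends on a small interface, and the algorithm behind it (AES-GCM today, a post-quantum scheme tomorrow) can be swapped without touching callers. The Encryptor protocol and class names below are my own, for illustration:

```python
import os
from typing import Protocol

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class Encryptor(Protocol):
    """Abstraction boundary: application code never names AES or RSA directly."""
    def encrypt(self, plaintext: bytes) -> bytes: ...
    def decrypt(self, ciphertext: bytes) -> bytes: ...

class AesGcmEncryptor:
    """Today's implementation; a post-quantum one can slot in later."""
    def __init__(self, key: bytes) -> None:
        self._aead = AESGCM(key)

    def encrypt(self, plaintext: bytes) -> bytes:
        nonce = os.urandom(12)              # prepend the nonce to the ciphertext
        return nonce + self._aead.encrypt(nonce, plaintext, None)

    def decrypt(self, ciphertext: bytes) -> bytes:
        return self._aead.decrypt(ciphertext[:12], ciphertext[12:], None)

# Callers depend only on the protocol; replacing the algorithm later is a
# one-line change at the construction site.
box: Encryptor = AesGcmEncryptor(AESGCM.generate_key(bit_length=128))
token = box.encrypt(b"user data")
assert box.decrypt(token) == b"user data"
```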
On the side of organizations, the process should be handled properly too, as they are the ones that will fund this development. Organizations are going to invest money in the approaches mentioned above and will actually implement them. So I think the focus should be on making sure that their teams are prepared for what is coming and that their software is built in a way that lets them maintain it in the future.
All this being said, it looks like we should (or must) be heading toward an industry where companies invest in and focus on quality and flexibility. With good-quality software and prepared engineers, the industry should adapt well to this fast-changing environment and prevent as many catastrophes as possible.
With the help of all the entities involved, like NIST and the many other organizations that work towards standardization and cybersecurity, we should, optimistically, have a solid framework to rely on as time goes by and technology improves.
Author
iOS Developer working in the software development industry with agile methodologies. Skilled in Swift, Objective-C, Python, PostgreSQL, SQL, PHP, and C++. More than 8 years of experience working as an iOS Developer, following best practices.