Coming to an SSL library near you? AI learns how to craft crude crypto all by itself

Neural networks trained by researchers at Google Brain can create their own cryptographic algorithms, but no one is quite sure how they work.

Neural networks are systems of connections that are based loosely on how neurons in the brain work. They are often used in deep learning to train AI models to complete a specific task, such as playing Go, recognizing speech or predicting the outcome of situations.

Neural networks aren't traditionally found anywhere near cryptography. However, a paper written by Google Brain researchers Martín Abadi and David Andersen, and published on arXiv, shows that they can learn how to encrypt and decrypt information they exchange with each other.

Alice is forever trying to send secret messages to Bob. First, the plaintext message is given to Alice, a neural net, as an input (P). Alice scrambles it into gibberish, the ciphertext (C), for Bob to decrypt. Eavesdropper Eve also gets a copy of the ciphertext C and tries to decrypt it. Bob and Eve are separate neural networks.

To make it easier for Bob, and harder for Eve, Alice and Bob share a key (K) that Alice uses to encrypt the input message P and Bob uses to decrypt the result. The key is provided as an input to both Alice and Bob, and a fresh key is generated for each message. The idea is that Alice learns some way to encrypt the message P with the shared key K to form the ciphertext C, and Bob then has to figure out how to recover P from C and K.
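In code, the setup can be pictured roughly as follows. This is a minimal PyTorch sketch for illustration only: the 16-bit message length, the fully connected layers and their sizes are assumptions, not the architecture used in the paper.

```python
# Minimal sketch of the Alice/Bob/Eve setup (illustrative only; the paper's
# networks differ). Bits are encoded as floats in {-1, +1}.
import torch
import torch.nn as nn

N_BITS = 16  # assumed plaintext/key length for this sketch


class Party(nn.Module):
    """A small fully connected network standing in for Alice, Bob or Eve."""

    def __init__(self, in_bits: int, out_bits: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_bits, hidden),
            nn.ReLU(),
            nn.Linear(hidden, out_bits),
            nn.Tanh(),  # one output in [-1, 1] per bit
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


# Alice maps (P, K) -> C, Bob maps (C, K) -> P', Eve maps C alone -> P''.
alice = Party(in_bits=2 * N_BITS, out_bits=N_BITS)
bob = Party(in_bits=2 * N_BITS, out_bits=N_BITS)
eve = Party(in_bits=N_BITS, out_bits=N_BITS)


def sample_bits(n: int) -> torch.Tensor:
    """Random -1/+1 bit vectors, used for both plaintexts and keys."""
    return torch.randint(0, 2, (n, N_BITS)).float() * 2 - 1


# One forward pass: Alice encrypts, Bob decrypts with the key, Eve without it.
P, K = sample_bits(4), sample_bits(4)
C = alice(torch.cat([P, K], dim=1))       # ciphertext
P_bob = bob(torch.cat([C, K], dim=1))     # Bob's reconstruction
P_eve = eve(C)                            # Eve's guess, with no key
```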

If you repeat this experiment enough times, Alice and Bob will develop their own algorithm for encrypting and decrypting the message that Eve cannot crack without the key.
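Concretely, this is set up as an adversarial objective: Eve is trained to minimize her own reconstruction error, while Alice and Bob are trained to minimize Bob's error and, at the same time, to keep Eve no better than random guessing. The sketch below continues the toy setup above; the exact loss formulation and weighting in the paper differ.

```python
# Adversarial losses, continuing the sketch above (the paper's exact
# weighting differs). Errors are measured per bit, in [0, 1].
def reconstruction_error(p_true: torch.Tensor, p_guess: torch.Tensor) -> torch.Tensor:
    """Mean absolute error per bit: 0 is perfect, 0.5 is chance, 1 is all bits flipped."""
    return (p_true - p_guess).abs().mean() / 2


def eve_loss(P: torch.Tensor, P_eve: torch.Tensor) -> torch.Tensor:
    # Eve simply tries to reconstruct the plaintext from the ciphertext.
    return reconstruction_error(P, P_eve)


def alice_bob_loss(P: torch.Tensor, P_bob: torch.Tensor, P_eve: torch.Tensor) -> torch.Tensor:
    bob_err = reconstruction_error(P, P_bob)
    eve_err = reconstruction_error(P, P_eve)
    # Push Eve's per-bit error toward 0.5 (pure chance) rather than toward 1.0:
    # an Eve who is reliably wrong would still be leaking information.
    return bob_err + (0.5 - eve_err) ** 2
```

Training against an objective like this is what forces a key-dependent scrambling to emerge: if Alice ignored the key, Eve could in principle learn the same decryption mapping that Bob does.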


The neural nets train repeatedly on minibatches of 256 to 4,096 input tuples. Training is judged successful once the reconstruction error between Alice and Bob drops to 0.05 bits, while Eve should recover no more than one or two bits above the random-guessing rate.

Eve is reset after each round, and Alice and Bob have to hit the target threshold within 150,000 training steps, or the run is counted as a training failure.

At first, Eve is pretty good at cracking the messages. But after about 10,000 steps, Alice and Bob get better at communicating with each other, and the encryption they learn becomes too difficult for Eve to eavesdrop on. By 150,000 steps, Eve correctly guesses only 8 out of 16 bits in each message, the same rate as pure chance.
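A training loop for the sketch above might look like the following, alternating between an Alice/Bob update and an Eve update. The batch size and the 150,000-step budget come from the figures quoted above; the optimizer, learning rate and update schedule are assumptions, and resetting Eve to retrain her from scratch against a frozen Alice and Bob would be a separate evaluation pass, not shown here.

```python
# Alternating adversarial training, continuing the sketch above. Optimizer
# and learning rate are illustrative assumptions, not the paper's recipe.
BATCH = 4096          # minibatches of 256 to 4,096 examples are quoted above
MAX_STEPS = 150_000   # past this budget the run counts as a training failure

opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_eve = torch.optim.Adam(eve.parameters(), lr=1e-3)

for step in range(MAX_STEPS):
    # --- Alice/Bob update: good reconstruction for Bob, chance-level for Eve ---
    P, K = sample_bits(BATCH), sample_bits(BATCH)
    C = alice(torch.cat([P, K], dim=1))
    loss_ab = alice_bob_loss(P, bob(torch.cat([C, K], dim=1)), eve(C))
    opt_ab.zero_grad()
    loss_ab.backward()
    opt_ab.step()

    # --- Eve update: train only the eavesdropper, on fresh messages ---
    P, K = sample_bits(BATCH), sample_bits(BATCH)
    C = alice(torch.cat([P, K], dim=1)).detach()  # Eve's update must not touch Alice
    loss_eve = eve_loss(P, eve(C))
    opt_eve.zero_grad()
    loss_eve.backward()
    opt_eve.step()

    if step % 1000 == 0:
        print(f"step {step}: Alice/Bob loss {loss_ab.item():.3f}, Eve loss {loss_eve.item():.3f}")
```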

Although impressive, the learned cryptographic algorithms aren't yet practical. There is no way to guarantee how strong the encryption is, because the researchers don't understand the algorithms the networks come up with.

Machine learning isn't very transparent, and the magic behind it is often locked away in a so-called “black box.”

“Neural networks are notoriously difficult to explain, so it may be hard to characterize how the component functions,” the paper said.

Neural networks could still be useful alongside classical cryptographic methods, however. Although classical cryptographic algorithms are more transparent, they are not as good as neural networks at deciding which information to encrypt.

