This is an archive article published on August 6, 2023

Researchers successfully train deep learning model to steal data from keyboard keystrokes

The model trained by British researchers can understand what you are typing by simply listening to keystrokes on your keyboard.

Researchers were also able to determine what participants in a Zoom or Skype meeting were typing. (Image Source: Pixabay)

A deep learning model can steal sensitive information like usernames, passwords and messages by listening to what you type on your keyboard. Trained by a team of researchers from British universities, the sound-recognising algorithm can capture and decipher keystrokes recorded from a microphone with 95 per cent accuracy.

According to BleepingComputer, when the model was tested with the popular video conferencing solutions Zoom and Skype, the accuracy dropped to 93 per cent and 91.7 per cent respectively.

The algorithm sheds light on how deep learning could potentially be used to develop new types of malware that listen to keyboard strokes to steal information like credit card numbers, messages, conversations and other personal data.


Recent advancements in machine learning, combined with the availability of cheap, high-quality microphones, make sound-based attacks more viable than other methods, which are often limited by factors like data transfer speed and distance.

Test setup: The MacBook Pro has the same keyboard as other recent Apple laptops. (Image Source: arxiv.org)

How does it work?

To train the sound-recognising algorithm, the researchers captured data by pressing 36 keys on a MacBook Pro 25 times each and recording the sound produced by those keys. The audio was captured using an iPhone 13 mini that was 17 cm away from the laptop.

From the recordings, waveforms and spectrograms were produced that distinguished each key. The distinct sound signature of each key was then used to train an image classifier called ‘CoAtNet’, which predicts which key was pressed on the keyboard.
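
To illustrate the pipeline described above, here is a minimal Python sketch of the spectrogram-plus-image-classifier idea. It is not the researchers' code: it assumes keystroke clips saved as individual WAV files, uses torchaudio to compute mel-spectrograms, and substitutes a small CNN for the CoAtNet model used in the paper; all paths, parameters and function names are illustrative.

# Minimal sketch: keystroke audio -> mel-spectrogram "image" -> key classifier.
# A simple CNN stands in for CoAtNet; paths and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torchaudio

NUM_KEYS = 36  # the researchers recorded 36 keys, 25 presses each

# Transforms that turn a short keystroke clip into a spectrogram image
mel = torchaudio.transforms.MelSpectrogram(sample_rate=44100, n_fft=1024, n_mels=64)
to_db = torchaudio.transforms.AmplitudeToDB()

def keystroke_to_image(wav_path: str) -> torch.Tensor:
    waveform, sr = torchaudio.load(wav_path)       # (channels, samples)
    waveform = waveform.mean(dim=0, keepdim=True)  # mix down to mono
    # (resampling to the transform's 44.1 kHz rate is omitted for brevity)
    return to_db(mel(waveform))                    # (1, n_mels, time)

# Simple CNN classifier over spectrogram images (stand-in for CoAtNet)
classifier = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, NUM_KEYS),
)

optimiser = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(batch_images: torch.Tensor, batch_labels: torch.Tensor) -> float:
    """One gradient step on a batch of spectrograms and their key labels."""
    logits = classifier(batch_images)              # (batch, NUM_KEYS)
    loss = loss_fn(logits, batch_labels)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()

In this setup, each of the 36 keys would contribute its 25 labelled recordings as training data, and the trained classifier would then be run on new recordings to guess which keys were pressed.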

However, the technique does not necessarily require malware with access to the device's microphone. Threat actors can also join a Zoom call as a participant, record the keystroke sounds picked up by other users' microphones and infer what they are typing.


According to the research paper, users can protect themselves from such attacks by changing their typing patterns or using complex random passwords. White noise or software that mimics keystroke sounds can also be used to make the model less accurate.
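
As a rough illustration of the white-noise defence mentioned above, the sketch below plays quiet white noise through the speakers while the user types, masking some of the acoustic differences between keys. The use of the sounddevice library and the chosen volume level are assumptions for this example, not something specified in the paper.

# Minimal sketch: play low-level white noise to mask keystroke sounds.
# Library choice and volume are assumptions made for illustration.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 44100
DURATION_S = 10   # play noise in 10-second chunks
VOLUME = 0.05     # keep the masking noise quiet

def play_masking_noise() -> None:
    noise = VOLUME * np.random.randn(DURATION_S * SAMPLE_RATE).astype(np.float32)
    sd.play(noise, SAMPLE_RATE)
    sd.wait()     # block until the chunk has finished playing

if __name__ == "__main__":
    while True:   # loop so the noise covers the whole typing session
        play_masking_noise()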

Since the model was highly accurate even on the relatively quiet keyboards Apple has used in its laptops over the last two years, it is unlikely that switching to quieter switches on a mechanical keyboard or moving entirely to a membrane keyboard will help.

Currently, the best way to deal with such sound-based attacks is to use biometric authentication like a fingerprint scanner, face recognition or an iris scanner.
