Storing data is no easy feat. Keeping the world's information requires huge data centers that are expensive to build and run.
But a team of researchers from Delft University of Technology (TU Delft) in the Netherlands has just made a breakthrough that could change all that. In a study published in the journal Nature Nanotechnology, they describe using individual atoms to store data.
To perform this incredible feat, the team used a scanning tunneling microscope (STM), whose extremely sharp needle can push individual atoms around. They found that chlorine atoms on a copper surface formed a perfect grid, allowing them to change the arrangement and store information.
In this case, they created a 1-kilobyte (8,000-bit) memory. Scaled up, this approach could reach a storage density of 500 terabits per square inch (Tbpsi), around 500 times denser than anything commercially available today.
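A few lines of arithmetic give a rough sanity check of that figure (this is an illustration, not the team's own calculation): 8,000 bits written into the roughly 100-nanometer-wide patch described later in the article works out to about 500 terabits per square inch.

```python
# Back-of-the-envelope check of the reported density (a sketch, not the
# researchers' calculation): 8,000 bits in a ~100 nm x 100 nm patch.
bits = 8_000
area_m2 = (100e-9) ** 2            # 100 nm x 100 nm, in square meters
inch_m = 0.0254                    # meters per inch
area_in2 = area_m2 / inch_m ** 2   # patch area in square inches

density_tb_per_in2 = bits / area_in2 / 1e12  # terabits per square inch
print(round(density_tb_per_in2))  # ~516, consistent with the ~500 Tbpsi claim
```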
“In theory, this storage density would allow all books ever created by humans to be written on a single postage stamp,” said lead scientist Sander Otte from Delft in a statement.
Otte explained that the process works a bit like a sliding puzzle. “Every bit consists of two positions on a surface of copper atoms, and one chlorine atom that we can slide back and forth between these two positions,” he said. “If the chlorine atom is in the top position, there is a hole beneath it – we call this a one. If the hole is in the top position and the chlorine atom is on the bottom, then the bit is a zero.”
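The encoding Otte describes can be caricatured in a few lines of code. This is purely an illustration of the logic, not the team's software; the `AtomicBit` class and its methods are invented for this sketch.

```python
# Toy model of the sliding-puzzle bit (illustrative only): each bit is a
# pair of lattice sites sharing one chlorine atom and one hole, and the
# atom's position encodes the value.
class AtomicBit:
    def __init__(self):
        self.chlorine_on_top = True  # atom on top, hole beneath -> a one

    def read(self):
        return 1 if self.chlorine_on_top else 0

    def write(self, value):
        # "Sliding" the atom between the two sites flips the stored bit.
        self.chlorine_on_top = (value == 1)

# Encode one byte -- the character "A" -- as eight such bits.
bits = [AtomicBit() for _ in range(8)]
for bit, ch in zip(bits, format(ord("A"), "08b")):
    bit.write(int(ch))
print("".join(str(b.read()) for b in bits))  # 01000001
```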
At the moment, this process requires the memory to be kept in clean vacuum conditions at liquid nitrogen temperature, 77 kelvin (-196°C, -321°F). But the researchers say the method is scalable, so it may eventually work under more practical conditions.
To prove the technique was possible, the team stored a section of a lecture by physicist Richard Feynman called "There’s Plenty of Room at the Bottom" on a tiny area just 100 nanometers across. This involved using 8,000 “gaps” in a grid of chlorine atoms, and it remained stable for more than 40 hours.
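To put that 8,000-bit figure in perspective (assuming plain 8-bit ASCII; the team's actual text encoding isn't described here), the grid holds about 1,000 characters of text:

```python
# Rough sense of scale, assuming plain 8-bit ASCII encoding (an assumption,
# not the team's documented scheme).
bits_available = 8_000
bits_per_char = 8
print(bits_available // bits_per_char)  # 1000 characters fit in the grid

title = "There's Plenty of Room at the Bottom"
print(len(title) * bits_per_char)  # bits needed just for the lecture title
```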
Aside from the specialized equipment needed, another limitation at the moment is speed: reading or writing a block of data takes several minutes. But the experiment serves as a proof of concept, and maybe one day our data centers won’t be quite so big anymore.