Scientists Reveal Genome’s Origami Shape With 3-D Maps

Screen shot via Rice University

Each cell in your body contains about 3 billion nucleotides, enough DNA to stretch roughly 1.8 meters end to end. All of that genetic material must fold down to fit within the cell's tiny nucleus, a feat that requires incredibly precise folding patterns. For the first time, a group of researchers has created a high-resolution map of the genome's intricate origami-like loops. Erez Lieberman Aiden of Rice University served as senior author of the paper, which was published in Cell.

The map divides the genome into blocks 1,000 base pairs long and searches for "loops," points where distant parts of the genetic sequence are brought into close physical proximity. The team identified and studied 10,000 of these loops and their folding patterns.
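To make the idea concrete, here is a deliberately simplified sketch of what loop-finding in a contact map looks like. This is not the authors' actual algorithm; it just illustrates the principle that a loop appears as a local peak, a pair of blocks that touch far more often than their neighbors do. The matrix, window size, and enrichment threshold below are all invented for illustration.

```python
import numpy as np

def find_loop_peaks(contacts, window=1, enrichment=3.0):
    """Return (i, j) block pairs whose contact count exceeds
    `enrichment` times the mean of the surrounding window."""
    n = contacts.shape[0]
    peaks = []
    for i in range(window, n - window):
        # scan only the upper triangle, away from the diagonal
        for j in range(i + window + 1, n - window):
            neighborhood = contacts[i - window:i + window + 1,
                                    j - window:j + window + 1]
            # estimate the local background, excluding the center pixel
            background = (neighborhood.sum() - contacts[i, j]) / (neighborhood.size - 1)
            if background > 0 and contacts[i, j] > enrichment * background:
                peaks.append((i, j))
    return peaks

# A toy symmetric contact matrix with one planted loop between blocks 2 and 7:
m = np.ones((10, 10))
m[2, 7] = m[7, 2] = 10
print(find_loop_peaks(m))  # -> [(2, 7)]
```

Real Hi-C data is far noisier than this toy matrix, which is part of why the actual analysis demanded so much computing power.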

“More and more, we’re realizing that folding is regulation,” co-first author Suhas Rao of Baylor University said in a press release. “When you see genes turn on or off, what lies behind that is a change in folding. It’s a different way of thinking about how cells work.”

The folding can regulate gene expression by preventing transcription factors from gaining access to the DNA. Mapping where these loops occur uncovered thousands of previously unknown regulatory points. In the future, this information could help reveal the underlying causes of cancers and genetic diseases.

The current paper continues work Aiden's lab has been doing over the past five years. The team developed the Hi-C method in 2009. Named after the juice box, the Hi-C method probes the 3-D structure of condensed DNA to find where genes sit relative to one another. The new study examines the genome in much greater detail, making it better suited for biological research.

“In 2009, we were dividing the genome into 1-million-base blocks, and here we are dividing it into 1,000-base blocks,” added co-first author Miriam Huntley of Harvard. “Since any block can collide with any other block, we end up with a problem that is a millionfold more complicated. The overall database is simply vast.”
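Huntley's "millionfold" figure follows from simple arithmetic: shrinking the block size by a factor of 1,000 multiplies the number of blocks by 1,000, and since every block can pair with every other block, the number of possible contacts grows with the square of the block count. A quick back-of-the-envelope check:

```python
GENOME_BP = 3_000_000_000  # ~3 billion base pairs per cell

blocks_2009 = GENOME_BP // 1_000_000   # 1-million-base blocks -> 3,000
blocks_2014 = GENOME_BP // 1_000       # 1,000-base blocks    -> 3,000,000

# Possible block-to-block contacts scale with the square of the block count
pairs_2009 = blocks_2009 ** 2
pairs_2014 = blocks_2014 ** 2
print(pairs_2014 // pairs_2009)  # -> 1000000, the "millionfold" jump
```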

Processing all of that information into such a detailed map required a tremendous amount of computing power. Rather than use a computer's central processing unit (CPU), the team relied on the graphics processing unit (GPU). The GPU is better suited to processing large amounts of data in parallel, rendering the high-resolution map in a fraction of the time a CPU would have taken. The team was also able to filter out much of the "noise" in the raw data, producing a sharper, more readable map.

“When studying big data, there can be a tendency to try to solve problems by relying purely on statistical analyses to see what comes out, but our group has a different mentality,” Rao concluded. “Even though there was so much data, we still wanted to be able to look at it, visualize it and make sense of it. I would say that almost every phenomenon we observed was first seen with the naked eye.”
