Feynman: There's Plenty of Room at the Bottom (1959) (zyvex.com)
138 points by MaysonL on Dec 29, 2013 | 35 comments


A great article. I suspect the shortest route to atomically-precise manufacturing will be through engineering biology, since biology already knows how to do it. And there is already a high-school competition of the sort Feynman suggested at the end of the article: http://en.wikipedia.org/wiki/International_Genetically_Engin...


I'm pretty sure that's not going to be "the shortest route," J, but it may be "a route" eventually. Samsung already produces a NAND flash chip at 10 nm. The lattice spacing in crystalline Si is about 0.54 nm, so we're talking about a transistor feature size of a bit more than 18 atoms. These are available on the market as we speak, and you might even be using one right now if you have a sweet new Samsung SSD.

Proof-of-concept Si FETs have been made down to around 3 nm, last time I checked, or around 6 atoms. Grossly speaking, these are essentially refinements of photolithographic CMOS processes. Soon we'll hit a wall with this, but for now, I think atomically-precise manufacturing is already pretty achievable without engineering biology.
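
A quick sanity check of those figures, as a Python sketch (the ~0.543 nm lattice constant of crystalline Si is the only input; everything else is just division):

    # Feature size expressed in Si lattice spacings.
    SI_LATTICE_NM = 0.543  # lattice constant of crystalline silicon

    def atoms_across(feature_nm):
        """Approximate number of lattice spacings spanning a feature."""
        return feature_nm / SI_LATTICE_NM

    print(atoms_across(10))  # ~18.4 -> "a bit more than 18 atoms"
    print(atoms_across(3))   # ~5.5  -> "around 6 atoms"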


Biology does not do atomically-precise manufacturing. It does do a lot of sorting and filtering of random interactions, and in a few places (the ribosome), precision alignment of large complex basic units (amino acids). It is very, very unlikely that we will develop Drexlerian nanotech via biological processes.


The whole issue of whether or not nanotech would end up just looking like enzymes was a big part of the famous Drexler/Smalley debate (http://en.wikipedia.org/wiki/Drexler%E2%80%93Smalley_debate_...).

I always found Smalley's arguments that nanotech would end up looking like biology convincing. Not that I think it matters much: would designing organisms that take the shape of a part and excrete an "exoskeleton" to realize it be any less cool than assembling the same thing atom by atom?


I wouldn't call amino acids "large complex basic units" at all; they may be bigger than an atom, but 10-20 heavy atoms is hardly complex. Plus, many amino acids are themselves synthesized in vivo.

Moreover, this misses the point somewhat, namely that the resulting peptide chain folds into a protein with essentially atomic precision. Atoms may not be placed one by one, but an atomically precise structure is created.


When the machinery you want to lay out is composed of carbon atoms placed at sub-nanometer scale, 10-20 heavy atoms is like boxing gloves.


Biology does not do atomically-precise manufacturing.

It gets pretty damned close, no?

What's gene reading and writing, then? If not atomic-level, it operates on small molecules: the bases are 13-15-atom molecules.


Transcription is highly specialized, and still makes lots of errors.


Biology > Technology

I don't know why technologists and physicists underestimate and dismiss nature as "not good enough, we can do better" every single time. There is so much ignorance there, and it causes unnecessary friction and hate. To me, nature is the most powerful thing in existence, and it consists of more than our universe; it is responsible for the laws of physics, for life, and for everything else. There is so much we have yet to discover, and what we will discover in the end is that everything we find has already existed for aeons and is mostly already in use by nature (where efficient, from its perspective) somewhere out there.

Let me illustrate, with an example, the argument that our whole technology evolved only through bio- or natural mimicry. Physicists, who are among the individuals most disconnected from nature, were actually the first to learn by "observing" it. Newton, Galileo, Einstein, and many more used their incredibly unique perspectives to observe the nature of the things they wanted to understand. From that, they developed or discovered ideas, concepts, and later the logic that explains what they observed.


... which are corrected.


Through what mechanism?


Really not my area of expertise, but, generally, reasonably effective ones.

Apparently, DNA polymerase can perform that function:

http://www.torna.do/s/Error-correction-during-DNA-replicatio...

DNA polymerase (DNAP) is a dual-purpose enzyme that plays two opposite roles in two different situations during DNA replication. It plays its normal role as a polymerase, catalyzing the elongation of a new DNA molecule by adding a monomer. However, it can switch to the role of an exonuclease and shorten the same DNA by cleaving the last incorporated monomer from the nascent DNA.

https://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=tr...

DNA error correcting codes: No crossover.

DNA error correcting codes over the edit metric create embeddable markers for sequencing projects that are tolerant of sequencing errors. When a sequence library has multiple sources for its sequences, use of embedded markers permits tracking of sequence origin. Evolutionary algorithms are currently the best known technique for optimizing DNA error correcting codes. In this study we resolve the question of the utility of the crossover operator used in earlier studies on optimizing DNA error correcting codes. The crossover operator in question is found to be substantially counterproductive. A majority of crossover events produce results that violate minimum-distance constraints required for error correction. A new algorithm, a form of modified evolution strategy, is tested and is found to locate codes with record size. The table of best known sizes for DNA error correcting codes is updated.
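
To make the "minimum-distance constraints" concrete: a code over the edit metric is only valid if every pair of codewords is at least some minimum Levenshtein distance apart. Here is a minimal Python sketch of that check (my own illustration, not the paper's algorithm; the codewords are made up):

    from itertools import combinations

    def edit_distance(a, b):
        """Standard Levenshtein distance via dynamic programming."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # delete from a
                               cur[j - 1] + 1,              # insert into a
                               prev[j - 1] + (ca != cb)))   # substitute
            prev = cur
        return prev[-1]

    def is_valid_code(codewords, d_min):
        """A code meets the constraint only if every pair is >= d_min apart."""
        return all(edit_distance(x, y) >= d_min
                   for x, y in combinations(codewords, 2))

    print(is_valid_code(["ACGTAC", "TGCATG", "CATCGA"], 3))  # True

A crossover between two valid codewords can easily produce a child that lands within d_min of an existing codeword, which is the failure mode the abstract describes.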


Transcription is the process where RNA is generated from a DNA template.

The observed error rate is about 10^-5 per nt, and that is the rate after nucleolytic proofreading. (At 10^-5 per nt, a 1,000-nt transcript comes out with at least one error roughly 1% of the time.)


As I said: really not my area of expertise.

The point is that at multiple levels, biological functions are operating at the molecular and/or atomic level. The ATP/ADP reaction is another that comes to mind, though it's an energetic transformation involving the cleaving/joining of a single phosphate group.

As for proofing methods: sometimes a transcription is in error. What then? The problem may be caught further down the line: a protein is synthesized incorrectly and destroyed, a cell behaves improperly and the body's immune responses remove it, or an imperfect embryo forms and is reabsorbed or stillborn.

None of which denies the fundamental fact that biological processes occurring at the molecular/atomic level are in fact commonplace.


Enzymes can add hydroxyl groups to molecules in specific locations. A hydroxyl group contains only two atoms. Enzymes can also remove a couple of hydrogen atoms and their electrons at particular places in a molecule. By combining these and similar reactions, cells can do extremely specific remodeling of compounds. For example, a single bond in a fatty acid chain can be turned into a double bond by the dehydrogenation reaction mentioned before, and then hydration and subsequent oxidation yields a keto group. The overall change represents the addition of ONE oxygen atom to the molecule at a specific position.

This means that beta oxidation (the metabolic pathway where those reactions take place) of fatty acids involves manipulation of molecules at the atomic level. Drexler describes a path from biochemistry to molecular manufacturing in Engines of Creation. In fact, part of his argument is that biochemistry is to a great extent similar to the molecular machines he envisions.


This is not atomically precise, mechanosynthetic manufacturing. Can you use such a construction to create carbon nanotubes or equivalent and then lay them out in a three dimensional lattice structure with nanometer precision and six-sigma workpiece reliability?

Biology does not scale to Drexlerian nanotech. The domains are completely different.


"Can you use such a construction to create carbon nanotubes or equivalent and then lay them out in a three dimensional lattice structure with nanometer precision and six-sigma workpiece reliability?"

I can use biochemistry to build second-generation machines which might be capable of doing that. We might not have a diamondoid nanomachine yet, but we could eventually synthesize one, since biology gives us atomically precise positioning of atoms. This looks like a feasible way to develop Drexlerian nanotech.

An analogy with computer programming could be that biology is like assembly language. Using that you can make a higher-level language such as C, and from there you can develop much more powerful abstractions and technologies (Python, Perl, Lisp, Ruby, etc.).

The domains of biochemistry and molecular manufacturing are not completely different, because the former could be the foundation of the latter. Also, the science behind biology can inform nanotech. Transforming mechanical energy into chemical processes and vice versa is common in cells (e.g., motor proteins); this is very similar to the type of processes molecular nanotech aspires to perform.

Another thing is that cells are capable of correcting errors in DNA synthesis to a substantial degree: about 1 mistake for every 1 to 10 billion nucleotides, which for a ~3-billion-base-pair human genome is on the order of one error per replication. Reliability isn't an insurmountable problem in biology.

"This is not atomically precise, mechanosynthetic manufacturing."

But we can agree that biology can do atomically precise synthesis. And enzymatic catalysis can sometimes be described in terms of mechanical bending of molecules, as in ATP synthase.

If biochemistry were so irrelevant to molecular manufacturing, Drexler wouldn't have written so many pages talking about it or suggesting it as a way to develop nanotech.


Why can't you use biology to do what you want there? Proteins have ~0.05 nm precision at placing atoms, are programmable in three dimensions, and biology is pretty good at working with carbon. It exceeds our genetic engineering ability today, but we're getting better. Semiconductor Research Corp recently put some money towards this: http://www.src.org/newsroom/press-release/2013/521/


For those who think that nanotechnology is too mundane, what about machines made from nucleons?

https://en.wikipedia.org/wiki/Femtotechnology

Greg Egan speculates on femtotechnology in some of his sci-fi, mostly as insanely fast computers. (I mean, jeez, it takes SO MUCH TIME for an electron to whirl around the nucleus of an atom... it's so much faster when your computer is the nucleus.)

An excerpt from an Egan short story: http://gregegan.customer.netspace.net.au/SCHILD/00/SchildExc...

More on femtocomputing: http://hplusmagazine.com/2011/11/01/femtocomputing/


As a physicist and mechanical engineer ... I have absolutely no idea how femtotechnology is supposed to work. The entire premise seems to be: "quarks are smaller!" So? You know what you get when you put a bunch of sub-atomic particles together? Atoms.

I would like very much to see an actual femtocomputer design, one which has a chance of working (as a commentator in the h+ article points out: "Unfortunately, this relies on assumptions that contradict known physics and ones that have not yet been proven."), so I could at least understand what it is supposed to be...


QCD's strong coupling makes it hard to imagine how one could create programs for quarks with bounded errors. The linked article seems to suggest using gluons/interactions as gates for the colors on the quarks. This is so totally bonkers it is hard to know where to start, but perhaps an easy place is: how would you go about isolating a single quark and a single gluon in the horrible nonperturbative mess that is the QCD vacuum[0]?

If you want to program the nucleons in the nucleus, you can switch your description to chiral perturbation theory[1], which is weakly coupled, but you need to be able to shoot individual pions at individual nuclei, which would be extremely difficult, and might require enough energy to liberate the target nucleon from the rest of the nucleus, anyway, destroying your "computer".

[0] https://en.wikipedia.org/wiki/QCD_vacuum

[1] https://en.wikipedia.org/wiki/Chiral_perturbation_theory


Thank you for reminding me of Dragon's Egg, https://en.wikipedia.org/wiki/Dragon%27s_Egg Read it many years ago, was great fun. I guess it "inspired" one of the better Star Trek Voyager episodes, Blink of an Eye, http://en.memory-alpha.org/wiki/Blink_of_an_Eye_(episode)


And we are almost there. People currently print circuits with 18 nm features; that's about 180 atoms long, or a square with about 32 thousand atoms on its surface. We are still limited to planar designs on silicon, but that's just a couple of breakthroughs away from 3D printing once we get enough precision.
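
Checking that arithmetic (a sketch; it assumes the ~0.1 nm atomic diameter the parent comment seems to use, whereas the ~0.54 nm Si lattice spacing cited upthread would give roughly a fifth as many atoms):

    # Sanity check on "180 atoms long" and "32 thousand atoms on the surface".
    ATOM_NM = 0.1                       # assumed atomic diameter
    feature_nm = 18
    atoms_long = feature_nm / ATOM_NM   # 180.0
    print(atoms_long, atoms_long ** 2)  # 180.0 32400.0 -> "about 32 thousand"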

But that's the talk that launched the idea of molecular nanotech. That side note is far away (or not, who knows?), but the main line is almost here.


Chips are built differently from your standard 3D printing process: 3D printing is additive or subtractive, while chips are mask-deposit-strip (filter, additive, subtractive?). Keep in mind that the reason we can get to those sizes on microchips is the mind-boggling optical engineering in the masking step. I remember a prof explaining, in the clean room over one of our Perkin-Elmer lithography stations, that it takes years to redesign the systems for new wavelengths; the application's precision demands that each optical system be specifically engineered and tailored to its wavelength.

Get rid of the masking stage and you've got the missing link to 3D printing. But that would require nigh magical levels of control over a particle beam. Which could totally be solvable by a sufficiently clever team of engineers with a sufficiently robust and controllable beam source. But add that basically nothing behaves itself at those scales, especially machines, and there's still tons of work left.

But you're right, like Feynman was all those years ago: you can totally imagine it's plausible, so there's no way we won't try! (Great, now I want to go back into microtech...)


> Get rid of the masking stage and you've got the missing link to 3D printing. But that would require nigh magical levels of control over a particle beam.

Quite like in an electron microscope. OK, most electron microscopes are way less precise than top-of-the-line lithography, but there are some that are more. 3D printing with controlled ion deposition would be expensive as hell, but I see no reason why it could not be done.

Anyway, I was talking about iterating the filter-add-subtract pattern. Our current low precision is the reason we only iterate a few times; with increased precision, we can do it more. Yep, any advance here requires years and a lot of investment. But they always come through.


Now that you mention it, the setup I imagined is dang near identical to an SEM, but with atoms. All we need to do is replace the filament with a mass spectrometer's filter and bam! Instant micro-3D printer (where instant is ten-twenty years of R&D :). I'd be willing to devote a decade of my life working on that.

As for the second half, I apologize. I didn't realize you were thinking in terms of the long now. I think you're completely correct: we will spend the time and effort to make it better and faster. I have no doubt industry will come through for us in this. We got to supercomputers in our pockets in less than one lifetime, after all!


We've hit the limit of how far you can scale down lithography, however.


Sorry, but I don't believe that. We are still using light, for a start.

We've ended Moore's law, and we are at the limit of how small we can make a useful MOSFET. But there's still margin to improve; it's "just" harder now.


"Of course, a small automobile would only be useful for the mites to drive around in, and I suppose our Christian interests don't go that far."

I so want to make and have this tiny car and give mites driving permits omg.



To point out what everyone's overlooking: this was written in 1959, which means the wildest sci-fi nanotechnology he's imagining here... has, basically, been achieved by magnetic HDDs since around 2011.

"Why cannot we write the entire 24 volumes of the Encyclopaedia Brittanica on the head of a pin?"

"Let's see what would be involved. The head of a pin is a sixteenth of an inch across. If you magnify it by 25,000 diameters, the area of the head of the pin is then equal to the area of all the pages of the Encyclopaedia Brittanica. Therefore, all it is necessary to do is to reduce in size all the writing in the Encyclopaedia by 25,000 times. Is that possible? The resolving power of the eye is about 1/120 of an inch – that is roughly the diameter of one of the little dots on the fine half-tone reproductions in the Encyclopaedia. This, when you demagnify it by 25,000 times, is still 80 angstroms in diameter – 32 atoms across, in an ordinary metal. In other words, one of those dots still would contain in its area 1,000 atoms. So, each dot can easily be adjusted in size as required by the photoengraving, and there is no question that there is enough room on the head of a pin to put all of the Encyclopaedia Brittanica."

By his assumption, a pinhead is about 0.003 in^2 (2 mm^2). The current Encyclopaedia Britannica has about 300 million English characters [0], which is about 300 MB in a reasonable text encoding (although it should compress [1] to around 60 MB). So what Feynman is speculating about translates, in digital terms, to a memory density of 300 MB/(0.003 in^2), or 100 GB/in^2, or 800 Gb/in^2. This is roughly the areal density of an average magnetic HDD from 2011 [2].

To emphasize the point: Feynman is speculating about a dot "80 angstroms [8 nanometers] in diameter - 32 atoms across, in an ordinary metal". This is actually about the size of a magnetic domain on an HDD platter; Wikipedia gives it as 10 nm [3].

Unfortunately, there now exists a far larger English-language encyclopedia, which is 9 GB compressed [4] or 75 GB with images [5]. Using Wikipedia as the new benchmark, it is not currently possible to fit it on the head of a pin.
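
Redoing that arithmetic end to end, as a Python sketch (the pinhead area, Britannica size, and Wikipedia size are the figures cited above):

    # Feynman's pinhead as a storage medium, in modern units.
    pinhead_in2 = 0.003          # ~1/16" pinhead, ~2 mm^2
    britannica_bytes = 300e6     # ~300 million characters, ~300 MB

    density = britannica_bytes / pinhead_in2   # bytes per in^2
    print(density / 1e9)         # ~100 GB/in^2
    print(density * 8 / 1e9)     # ~800 Gb/in^2, about a 2011 HDD platter

    # Wikipedia no longer fits at that density:
    wikipedia_bytes = 9e9        # text-only, compressed
    print(wikipedia_bytes / density / pinhead_in2)  # ~30 pinheads needed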

[0] https://en.wikipedia.org/wiki/Wikipedia:Size_comparisons#Com...

[1] https://en.wikipedia.org/wiki/Entropy_%28information_theory%... ("1.5 bits per character")

[2] http://storageconference.org/2013/Papers/2013.Paper.01.pdf

[3] https://en.wikipedia.org/wiki/Magnetic_storage#Design ("Magnetic grains are typically 10 nm in size...")

[4] https://en.wikipedia.org/wiki/Wikipedia:Database_download#En...

[5] http://xowa.sourceforge.net/image_dbs.html


I believe it's only 2gb without images, 9gb with, but I may be incorrect.


Check again. The text-only snapshot is 44 GB of XML compressed to a 9.5 GB .bz2 -- that's just the current text of the English-language Wikipedia. And XOWA's snapshot, with images, is 75 GB compressed.


wow, thanks for sharing this!


an application of DNA as a building block: http://www.youtube.com/watch?v=-5KLTonB3Pg



