Reality algorithms

When the computer revolution started, it prompted research into algorithms and algorithmic complexity.

Wikipedia describes an algorithm as “a procedure (a finite set of well-defined instructions) for accomplishing some task which, given an initial state, will terminate in a defined end-state.”

I believe that, just as this definition applies to algorithms running on Turing machines, a similar definition will apply to molecular algorithms. Von Neumann machines are self-replicating nanobots that carry out procedures to accomplish tasks, using real matter as their substrate rather than electrical pulses in a fixed structure.

The aspect of this that interests me is the study of algorithmic complexity. Computational algorithms can be classified by how their running time scales with the number of elements involved. Algorithms divide into several complexity classes, each with its own subtleties, but broadly you can have algorithms that take linear, polynomial, or exponential amounts of time relative to the size of the input.
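
As a rough illustration (none of this code is from the original post, and the function names are mine), here is a small Python sketch that counts the steps taken by a linear scan, a quadratic all-pairs comparison, and an exponential subset enumeration as the input grows:

```python
from itertools import combinations

def linear_scan(items, target):
    """O(n): look at each element at most once."""
    steps = 0
    for x in items:
        steps += 1
        if x == target:
            break
    return steps

def all_pairs(items):
    """O(n^2): compare every element with every other element."""
    steps = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            steps += 1
    return steps

def all_subsets(items):
    """O(2^n): enumerate every subset of the input."""
    steps = 0
    for r in range(len(items) + 1):
        for _ in combinations(items, r):
            steps += 1
    return steps

for n in (4, 8, 16):
    data = list(range(n))
    print(n, linear_scan(data, -1), all_pairs(data), all_subsets(data))
```

Running it shows the step counts pulling apart quickly: at n = 16 the linear scan takes 16 steps, the pairwise comparison 120, and the subset enumeration 65,536.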

There are often many ways to reach the same end point of an algorithm. Sorting a list of elements is a prime example, with methods ranging from the incredibly slow bubble sort to the much quicker binary tree sort (yes, Quicksort is another fast algorithm, but its worst-case timing isn’t as good as a balanced binary tree sort’s). I wonder whether the same kind of analysis will apply to carrying out molecular algorithms. What is the best way to replicate yourself 1000 times? What is the best order in which to assemble a protein from amino acids while it is trying to fold itself? Will the same algorithms we use for computation be applicable to molecular engineering?
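
For concreteness, here is a minimal Python sketch of the two sorting extremes mentioned above: a bubble sort and a plain (unbalanced) binary tree sort. It is an illustration rather than a tuned implementation; note that an unbalanced tree sort also degrades to quadratic time on already-sorted input, which is why the balanced-tree caveat above matters.

```python
def bubble_sort(items):
    """O(n^2): repeatedly swap adjacent out-of-order pairs."""
    items = list(items)
    for end in range(len(items) - 1, 0, -1):
        for i in range(end):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
    return items

def tree_sort(items):
    """Binary tree sort: insert into a BST, then read it back in order.
    Average O(n log n); worst case O(n^2) if the tree becomes a chain."""
    class Node:
        def __init__(self, value):
            self.value, self.left, self.right = value, None, None

    def insert(node, value):
        if node is None:
            return Node(value)
        if value < node.value:
            node.left = insert(node.left, value)
        else:
            node.right = insert(node.right, value)
        return node

    def walk(node, out):
        if node is not None:
            walk(node.left, out)
            out.append(node.value)
            walk(node.right, out)
        return out

    root = None
    for value in items:
        root = insert(root, value)
    return walk(root, [])

print(bubble_sort([5, 2, 9, 1]))  # [1, 2, 5, 9]
print(tree_sort([5, 2, 9, 1]))    # [1, 2, 5, 9]
```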


