Hacklet 68 – Rocket Projects

There’s just something amazing about counting down and watching a rocket lift off the pad, soaring high into the sky. The excitement is multiplied when the rocket is one you built yourself. Amateur rocketry has been inspiring hackers and engineers for centuries. In the USA, modern amateur rocketry gained popularity after Sputnik-1, continuing on through the space race. Much of this history is captured in the book Rocket Boys by Homer Hickam, which is well worth a read. This week’s Hacklet is dedicated to some of the best rocketry projects on Hackaday.io!

We start with [Sagar] and Guided Rocket. [Sagar] is building a rocket with a self-stabilization system. Many projects use articulated fins for this, and [Sagar] plans to add fins in the future, but he’s starting with an articulated rocket motor. The motor sits inside a gimbal, which allows it to tilt about 10 degrees in any direction. An Arduino is the brain of the system. The Arduino gathers data from an MPU6050 IMU sensor, then determines how to steer the rocket motor. Steering is accomplished with a couple of micro servos connected to the gimbal.
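
To give a feel for how little code the basic control loop needs, here is a minimal Arduino-style sketch of the idea. It is not [Sagar]’s firmware; the servo pins, gains, and the roughly 10-degree travel limit are assumptions for illustration, and a real rocket under thrust would need gyro integration rather than accelerometer tilt, though the loop structure is the same.

```cpp
// Minimal sketch of the gimbal-steering idea (illustrative, not [Sagar]'s code).
#include <Wire.h>
#include <Servo.h>

const int MPU_ADDR = 0x68;    // MPU6050 default I2C address
Servo pitchServo, rollServo;  // tilt the motor gimbal on two axes

int16_t read16() {            // read one big-endian 16-bit register pair
  int16_t hi = Wire.read();
  int16_t lo = Wire.read();
  return (hi << 8) | (lo & 0xFF);
}

void setup() {
  Wire.begin();
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x6B);           // PWR_MGMT_1: clear the sleep bit to wake the IMU
  Wire.write(0);
  Wire.endTransmission();
  pitchServo.attach(9);       // hypothetical servo pins
  rollServo.attach(10);
}

void loop() {
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x3B);           // ACCEL_XOUT_H: start of the accelerometer registers
  Wire.endTransmission(false);
  Wire.requestFrom(MPU_ADDR, 6);
  int16_t ax = read16(), ay = read16(), az = read16();

  // Estimate tilt from the accelerometer and steer the motor against it.
  float pitch = atan2((float)ax, (float)az) * 180.0 / PI;
  float roll  = atan2((float)ay, (float)az) * 180.0 / PI;
  pitchServo.write(constrain(90 - pitch, 80, 100));  // stay within ~10 degrees of center
  rollServo.write(constrain(90 - roll,  80, 100));
  delay(10);
}
```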

 

Next up is [Howie], with Homemade rocket engine. [Howie] is cooking some seriously hot stuff on his stove. Rocket candy to be precise, similar to the fuel [Homer Hickam] wrote about in Rocket Boys. This solid fuel is so named because one of the main ingredients is sugar. The other main ingredient is stump remover, or potassium nitrate. Everything is mixed and heated together on a skillet for about 30 minutes, then pushed into rocket engine tubes. It goes without saying that you shouldn’t try this one at home unless you’re really sure of what you’re doing!

 

Everyone wants to know how high their rocket went. [Vcazan] created AltiRocket to record acceleration and altitude data. AltiRocket also transmits the data to the ground via a radio link. An Arduino Nano keeps things light. A BMP108 barometric sensor captures pressure data, which is easily converted into altitude. Launch forces are captured by a 3-axis accelerometer. A tiny LiPo battery provides power. The entire system weighs only 23 grams! [Vcazan] has already flown AltiRocket, collecting data from several flights earlier this summer.
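
The pressure-to-altitude conversion is just the standard-atmosphere (hypsometric) formula. A quick sketch of that step, assuming standard sea-level pressure and having nothing to do with [Vcazan]’s actual firmware:

```cpp
#include <cmath>
#include <cstdio>

// Standard-atmosphere conversion from pressure to altitude.
// seaLevelPa is the local sea-level pressure; 101325 Pa is the standard default.
double pressureToAltitude(double pressurePa, double seaLevelPa = 101325.0) {
  return 44330.0 * (1.0 - std::pow(pressurePa / seaLevelPa, 1.0 / 5.255));
}

int main() {
  // Example: ~95000 Pa measured at apogee works out to roughly 540 m.
  std::printf("%.1f m\n", pressureToAltitude(95000.0));
}
```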

 

Finally we have [J. M. Hopkins], who is working on a huge project to do just about everything! High Power Experimental Rocket Platform includes designing and building everything from the rocket fuel, to the rocket itself, to a GPS-guided parachute recovery system. [J. M. Hopkins] has already accomplished two of his goals, making his own fuel and testing nozzle designs. The electronics package to be included on the rocket is impressive, including GPS, IMU, barometric, and temperature sensors. Data will be sent back to the ground by a 70cm transceiver. The ground station will use a high-gain, human-guided Yagi tracking antenna with a low noise amplifier to pick up the signal.

If you want more rocketry goodness, check out our brand new rocket project list! Rocket projects move fast; if I missed yours as it streaked by, don’t hesitate to drop me a message on Hackaday.io. That’s it for this week’s Hacklet. As always, see you next week. Same hack time, same hack channel, bringing you the best of Hackaday.io!

Filed under: Hackaday Columns

Build Your Own CPU? That’s the Easy Part!

You want to build your own CPU? That’s great fun, but you might find it isn’t as hard as you think. I’ve done several CPUs over the years, and there’s no shortage of other custom CPUs out there ranging from pretty serious attempts to computers made out of discrete chips to computers made with relays. Not to trivialize the attempt, but the real problem isn’t the CPU. It is the infrastructure.

What Kind of Infrastructure?

I suppose the holy grail would be to bootstrap your custom CPU into a full-blown Linux system. That’s a big enough job that I haven’t done it. Although you might be more productive than I am, you probably need a certain amount of sleep, and so you may want to consider if you can really get it all done in a reasonable time. Many custom CPUs, for example, don’t run interactive operating systems (or any operating system, for that matter). In extreme cases, custom CPUs don’t have any infrastructure and you program them in straight machine code.

Machine code is error-prone, so you really need an assembler. If you are working on a big machine, you might even want a linker. Assembly language coding gets tedious after a while, so maybe you want a C compiler (or some other language). A debugger? What about an operating system?

Each one of those things is a pretty serious project all by itself (on top of the project of making a fairly capable CPU). Unless you have a lot of free time on your hands or a big team, you are going to have to consider how to hack some shortcuts.

Getting Infrastructure?

The easiest way to get infrastructure is to steal it. But that means your CPU has to be compatible with some other available CPU (like OpenSparc or OpenRisc) and what fun is that? Still, the Internet is full of clone CPUs that work this way. What good is a clone CPU? Presumably, the designer wants to use that particular processor, but wants to integrate it with other items to produce a system on a chip. Of course, sometimes, people just want to emulate an old machine, and that can be fun too.

In general, though, the appeal of developing your own CPU is to make it your own. Maybe you want to experiment with strange instruction set architectures. Perhaps you have an idea about how to minimize processor stalls. Or you could be like me and just want a computer that models the way you think better than any commercial alternative. If so, what do you do? You could try porting infrastructure. This is about midway between stealing and building from scratch.

Portable Options

There are quite a few options for portable assemblers. Assuming your processor doesn’t look too strange and you don’t mind conventional assembler conventions about labels and symbols, you might consider TDASM or TASM. I have my own variation on this, AXASM, and I’ll talk about it more in the near future.

Assembly language is fine, but you really want a high level language. Of course, your first thought will be to port gcc, which is a great C and C++ compiler (among other things). There’s good news, bad news, and worse news. The good news is that gcc is made to be portable as long as your architecture fits some predefined notions (for example, at least 32-bit integers and a flat address space). The bad news is that it is fairly difficult to do a port. The worst news is there is only a limited amount of documentation and a lot of it is very out of date.

Still, it is possible. There are only three things you have to create to produce a cross compiler:

  • A machine description
  • A machine header
  • Some machine-specific functions

However, building these is fairly complex and uses a Lisp-like notation that isn’t always intuitive. If you want to tackle it, there are several documents of interest. There’s a very good slide show overview, very out of date official documentation, and some guy’s master’s thesis. However, be prepared to read a lot of source code and experiment, too. Then you’ll probably also want to port gdb, which is also non-trivial (see the video below).

There are other C compilers. The llvm project has clang which you might find slightly easier to port, although it is still not what I would consider trivial. The lcc compiler started out as a book in 1995. It uses iburg to do code generation, and that tool might be useful with some other retargeting projects, as well. Although the vbcc compiler isn’t frequently updated, the documentation of its backend looks very good and it appears to be one of the easier compilers to port. There is a portable C compiler, PCC, that is quite venerable. I’ve seen people port some of the “small C” variants to a different CPU, although since they aren’t standard C, that is only of limited use.

Keep in mind, there’s more to doing a gcc port than just the C compiler. You’ll need to define your ABI (Application Binary Interface; basically how memory is organized and arguments passed). You’ll also need to provide at least some bootstrap C library, although you may be able to repurpose a lot of the standard library after you get the compiler working.

So maybe the C compiler is a bit much. There are other ways to get a high level language going. Producing a workable JVM (or other virtual machine) would allow you to cross compile Java and is probably less work overall. Still not easy, though, and the performance of your JVM will probably not be even close to a compiled program. I have found that versions of Forth are easy to get going. Jones on Forth is a good place to start if you can find a backup copy of it.
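
Part of the reason Forth bootstraps so easily is that the whole engine is a stack plus a dispatch loop. A toy token-threaded inner loop in C++ (nothing like a complete Forth, and unrelated to the Jones implementation) shows why the core is so small:

```cpp
#include <cstdio>
#include <vector>

// Toy token-threaded "Forth-like" inner loop: each token selects a primitive
// that manipulates a data stack. A real Forth adds a dictionary, a return
// stack, and a compiler, but the core engine is about this small.
enum Op { PUSH, ADD, MUL, PRINT, HALT };

void run(const std::vector<int>& code) {
  std::vector<int> stack;
  for (size_t ip = 0; ip < code.size(); ++ip) {
    switch (code[ip]) {
      case PUSH: stack.push_back(code[++ip]); break;
      case ADD: { int b = stack.back(); stack.pop_back(); stack.back() += b; break; }
      case MUL: { int b = stack.back(); stack.pop_back(); stack.back() *= b; break; }
      case PRINT: std::printf("%d\n", stack.back()); stack.pop_back(); break;
      case HALT: return;
    }
  }
}

int main() {
  // Equivalent of the Forth line: 2 3 4 * + .   ( prints 14 )
  run({PUSH, 2, PUSH, 3, PUSH, 4, MUL, ADD, PRINT, HALT});
}
```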

If you do bite the bullet and build a C compiler, the operating system is the next hurdle. Most Linux builds assume you have advanced features like memory management. There is a version, uClinux, that might be slightly easier to port. You might be better off looking at something like Contiki or FreeRTOS.

A Shot of Realism

Building a new CPU isn’t for the fainthearted and probably not your best bet for a first FPGA project. Sometimes, just getting a decent front panel can be a challenge (see video below), never mind compilers and operating systems.

Bootstrapping a new system to a full Linux-running monster would be a lot of work for one hacker. It might be more appropriate for a team of hackers. Maybe you can host your project on Hackaday.io.

Still, just because you can’t whip up the next 128-bit superscalar CPU on a weekend doesn’t mean you shouldn’t try your hand at building a CPU. You’ll learn a lot and, who knows, you might even invent something new.

Filed under: computer hacks, FPGA

44 Mac Pros Racked Up to Replace Each Rack of 64 Mac Minis

We were delighted at seeing 96 MacBook Pros in a rack a couple of days ago serving as testing hardware. It’s pretty cool to see a similar exquisitely executed hack that is actually in use as a production server. imgix is a startup that provides image resizing for major web platforms. This means they need some real image processing horsepower, and they recently finalized a design that installs 44 Mac Pro computers in each rack. This hardware was chosen because it’s more than capable of doing the heavy lifting when it comes to image processing. And it turns out to be a much better use of rack space than the 64 Mac Minis it replaces.

Racking Mac Pro for Production

Each of the 11 R2 panels holds four Mac Pros. Cooling was the first order of business, so each panel has a grate on the right side for cold-air intake. This is a sealed duct into which one side of each Pro is mounted. That allows each computer’s built-in exhaust fan to cool it, pulling in cold air and exhausting it out the opposite side.

Port access to each is provided on the front of the panel as well. Connectors are mounted on the right side of the front plate which is out of frame in this image. Power and Ethernet run out the back of the rack.

The only downside of this method is that if one computer dies you need to pull the entire panel to replace it. Those four machines represent 9% of the total rack, so imgix designed the 44-node system to deal with that kind of processing loss without taking the entire rack down for service.

Why This Bests the Mac Mini

The three racks: Linux servers, Mac Minis, and Mac Pros

Here you can see the three different racks that the company is using. On the left is common server equipment running Linux. In the middle is the R1 design, which uses 64 Mac Minis for graphics-intensive tasks. To the right is the new R2 rack which replaces the R1 design.

Obviously each Mac Pro is more powerful than a Mac Mini, but I reached out to imgix to ask what prompted them to move away from the R1 design, which hosts eight rack panels each with eight Mac Minis. [Simon Kuhn], the Director of Production, makes the point that the original rack design is a good one, but in the end there’s just too little computing power in the space of one rack to make sense.

Although physically there is room for at least twice as many Mac Mini units — by mounting them two-deep in each space — this would have caused several problems. First up is heat. Keeping the second row of computers within safe operating temperatures would have been challenging, if not impossible. The second is automated power control. The R1 racks used two sets of 48 controllable outlets to power computers and cooling fans. This is important, as the outlets allow them to power-cycle misbehaving units remotely. And finally, more units means more Ethernet connections to deal with.

We have a great time looking at custom server rack setups. If you have one of your own, or a favorite that someone else built, please let us know!

[Thanks to drw72 for mentioning R2 in a comment]

Filed under: computer hacks, internet hacks, macs hacks

Quantum Mechanics in your Processor: Tunneling and Transistors

By the turn of the 20th century, most scientists were convinced that the natural world was composed of atoms. [Einstein’s] 1905 paper on Brownian motion, which links the behavior of tiny particles suspended in a liquid to the movement of atoms, put the nail in the coffin of the anti-atom crowd. No one could actually see atoms, however. The typical size of a single atom ranges from 30 to 300 picometers. With the wavelength of visible light coming in at a whopping 400 to 700 nanometers, it is simply not possible to “see” an atom. Not possible with visible light, that is. It was the summer of 1982 when Gerd Binnig and Heinrich Rohrer, two researchers at IBM’s Zurich Research Laboratory, showed the world the first-ever visual image of an atomic structure. They would be awarded the Nobel Prize in Physics for their invention in 1986.

The Scanning Tunneling Microscope

IBM’s Scanning Tunneling Microscope, or STM for short, uses an atomically sharp needle that passes over the surface of an (electrically conductive) object – the distance between the tip and object being just a few hundred picometers, or the diameter of a large atom.

A small voltage is applied between the needle and the object. Electrons ‘move’ from the object to the needle tip. The needle scans the object, much like a CRT screen is scanned. The current flowing from the object to the needle is measured, and the tip of the needle is moved up and down so that this current value does not change, allowing the needle to perfectly contour the object as it scans. If one makes a visual image of the current values after the scan is complete, individual atoms become recognizable. Some of this might sound familiar, as we’ve seen a handful of people make electron microscopes from scratch. What we’re going to focus on in this article is how these electrons ‘move’ from the object to the needle. Unless you’re well versed in quantum mechanics, the answer might just leave your jaw in the same position as this image from a home-built STM machine will.
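
That “move the tip so the current doesn’t change” step is an ordinary feedback loop. Here is a rough, self-contained simulation of the constant-current idea; the exponential current model, the gain, and every number in it are made up for illustration and have nothing to do with a real STM controller:

```cpp
#include <cmath>
#include <cstdio>

// Rough simulation of constant-current STM feedback (illustrative only).
// Tunneling current falls off exponentially with the tip-sample gap, so
// servoing the tip height to hold the current constant makes the recorded
// tip position trace the surface.
const double K = 10.0;                                          // assumed decay constant, 1/nm

double surfaceHeight(double x) { return 0.1 * std::sin(x); }    // fake topography (nm)

double tunnelCurrent(double tipZ, double x) {                   // arbitrary units
  return std::exp(-K * (tipZ - surfaceHeight(x)));
}

int main() {
  const double targetGap = 0.5;                                 // nm; sets the current setpoint
  const double setpoint  = std::exp(-K * targetGap);
  double z = 1.0;                                               // start with the tip retracted
  for (double x = 0.0; x < 6.3; x += 0.7) {                     // one scan line
    for (int i = 0; i < 50; ++i) {                              // feedback loop at this x position
      double error = std::log(tunnelCurrent(z, x) / setpoint) / K;  // = target gap minus actual gap
      z += 0.5 * error;                                         // gap too small: retract; too big: approach
    }
    std::printf("x=%.1f  tip z=%.2f nm  (surface %.2f nm)\n", x, z, surfaceHeight(x));
  }
}
```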

Quantum Tunneling

Quantum Mechanics is a strange world, indeed. Everyday things that we take for granted, things like cause-and-effect and elementary classical laws do not work in the world inside the atom. Particles popping in and out of existence are the norm here.

STMs can also relocate atoms, as IBM demonstrated by spelling its logo with 35 xenon atoms

In fact, at the tiny scales we’re working at, particles can take on wave-like properties in a phenomenon known as complementarity, which was our topic last week. Electrons are particles; subatomic particles, that is, which opens them up to nature’s wave-particle duality. If we look at the electron as a particle, there is no way for it to move from the surface of our object to the needle. The resistance is too great for the small voltage to overcome. It’s what they call an energy barrier. But the electrons are obviously getting across the barrier. How? Well, if we take quantum mechanics seriously and look at the electron as a wave, it becomes possible to cross the barrier.

The Advent of Wave Mechanics

In 1926, a man by the name of [Erwin Schrödinger] published a paper describing an incredible leap forward in quantum mechanics. In fact, the label “quantum mechanics” was not coined until after his famous paper.

The waveform hits the barrier, but part of it is able to move past. (“Quantum Tunnelling animation” by Yuvalr)

It was just called quantum theory beforehand. [Schrodinger] realized that [Heisenberg’s] Uncertainty Principle was linked to the wave-like behavior of particles. Even though the particle and wave nature of the electron were complementary, they were still related. [Schrodinger’s] wave mechanics uses the wave nature of the electron to predict its location within a certain percentage. The higher the amplitude, the higher the probability of finding a particle. Observing the electron results in the so-called “collapse of the wave function”, and it takes on the mutually exclusive properties of a particle or wave.

It’s difficult to express in words how important this discovery was. Wave mechanics to the quantum world is analogous to [Newton’s] laws of motion to the macro world. It gave scientists the ability to predict the probable location of an electron in the atom. Many will remember the s, p, d and f orbitals from high school chemistry class. These were developed via the quantum numbers – a result of [Schrodinger’s] wave mechanics.

Quantum Tunneling can now be explained by the very small amplitude of the electron wave that moves past the energy barrier. The presence of some of the wave on the other side of the barrier represents a probability of the electron appearing. Send enough electrons, and some will appear.
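
The article doesn’t need the math, but the standard textbook result for a rectangular barrier gives a feel for how touchy this is: the transmission probability falls off exponentially with the barrier width L,

$$T \approx e^{-2\kappa L}, \qquad \kappa = \frac{\sqrt{2m\,(V_0 - E)}}{\hbar},$$

where V₀ − E is how far the electron’s energy sits below the top of the barrier and m is the electron’s mass. For an electron about 1 eV below the barrier, κ works out to roughly 5 nm⁻¹, so every extra ångström of gap cuts the tunneling current by about a factor of e. That exponential sensitivity is what lets the STM resolve individual atoms, and it is the same effect that makes very thin transistor barriers leak.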

The Tunneling Transistor

Quantum tunneling is not a good thing when you’re trying to shrink transistors ever smaller. Transistors need barriers. When electrons start tunneling through these barriers, you get problems. Big problems. In fact, quantum tunneling sets a fundamental limit on how small transistors can get. If any internal barriers get thinner than a nanometer, too much current will tunnel through when the transistor is off. It might be useful, however, to design a processor to use quantum mechanics to its advantage – a quantum computer. This will be the subject of the next article.

Sources:

http://www.nanoscience.com/products/stm/technology-overview/tunneling/

http://www.azonano.com/article.aspx?ArticleID=1373

http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/microscope/

Chemistry: Atoms First, by Julia Burdge, Chapter 3. ISBN 9781259208416

Filed under: Hackaday Columns

Mobile telematic startup Driveway puts $10M in its glove compartment

Most people can use some objective advice once in a while, especially if it has to do with their driving skills — not only because it makes for better roads, but also because few people enjoy taking advice about it from their peers.

Startup Driveway Software closed a $10 million series A round to grow its mobile telematics technology, which apparently gives users “objective ratings” on their driving styles, the San Francisco-based company announced today.

Driveway created an app that uses smartphone sensors to create a 3D image of the user’s car. With no need for the hardware used by competitors Automatic and Progressive, the app analyzes factors such as harsh braking, high speeds, or tight cornering to come up with a score for the user.
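
Driveway hasn’t published how its scoring works, but the general flavor is easy to sketch: sample the phone’s sensors, flag events that cross a threshold, and fold the counts into a 0-to-100 number. The thresholds and weights below are invented purely for illustration and are not Driveway’s model:

```cpp
#include <cstdio>
#include <vector>

// Purely illustrative driving-score sketch; thresholds and weights are made up.
struct Sample { double longAccel; double latAccel; double speedKmh; };  // one reading per second

int driveScore(const std::vector<Sample>& trip) {
  int penalties = 0;
  for (const Sample& s : trip) {
    if (s.longAccel < -3.0)  penalties += 5;   // harsh braking (m/s^2)
    if (s.latAccel  >  3.0)  penalties += 3;   // tight cornering
    if (s.speedKmh  > 130.0) penalties += 2;   // sustained high speed
  }
  int score = 100 - penalties;
  return score < 0 ? 0 : score;
}

int main() {
  std::vector<Sample> trip = {{-1.0, 0.5, 90}, {-3.5, 0.2, 60}, {-0.5, 3.4, 110}};
  std::printf("score: %d\n", driveScore(trip));  // one hard brake + one hard corner -> 92
}
```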

“The score is zero to 100. Based on the score… you can get objective notes of your driver style, or individual suggestions on a bit too much of hard braking,” founder and CEO Igor Batsman told VentureBeat. “It is very unique to each driver. It helps all of us to make the road safer and gradually improve our driving skills.”

The startup, which released the drivewise.ly app for iOS and Android, said that as of now, the main goal is to help users become better drivers by knowing their flaws. However, Driveway’s VP of business data and operation, Roman Glukhovsky, explained in a phone interview that the startup’s real potential lies in its data collection.

“In three years, we’ll have the value of Big Data of millions of cars,” Glukhovsky said. “The most exciting opportunity for the company is to become like a Google of driving data.”

Such data, the company said, can potentially help users decrease their insurance rates by up to 30 percent or save gas money by fixing common mistakes that increase gasoline consumption. It can also help parents watch their teenagers’ driving habits.

The interesting part of this data collection is its commercial and institutional usage.

“The most obvious [beneficiary] is insurance and fuel industry,” Glukhovsky added. “Also, we’ll be able to say that one in two are slamming their brakes at the curve in the bridge, which could [indicate] a hazard.”

The round was led by Ervington Investments, an investing arm for Russian businessman and Chelsea Football Club owner Roman Abramovich.

Driveway has raised $11.3 million so far, and it plans to double its 20-person team within the next three months. It is also said to be working with insurance companies and other prospective partners.
