Moritz v. Sivers – Hackaday

Listening to the Sounds of a 1960s Military Computer

Restoring vintage computers is a favorite task of many hardware hackers. Retrocomputing probably makes you think of home computer brands like Commodore, Amiga, or Apple, but [Erik Baigar] is deeply into collecting early military computers from the UK-based Elliott company. Earlier this year he made a detailed video that shows how he successfully brought an Elliott 920M from the 1960s back to life.

It is quite amazing that Elliott managed to fit a 1960s computer into a shoebox-sized package. Because computers had not yet settled on the now-common 8-bit word size back then, the Elliott 900 series are rather exotic 18-bit and 12-bit machines. The 920M served as a guidance computer for European space rockets in the 1960s and ’70s, but also for navigation in fighter jets until as late as 2010.

Opening up the innards of this machine reveals some exotic quirks of early electronics manufacturing. The logic modules contain multilayer PCBs where components were welded, rather than soldered, onto thin sheets of Mylar foil that were then potted in Araldite.

To get the computer running, [Erik Baigar] first had to recreate the custom connectors using a milling machine. He then used an Arduino to simulate a paper tape reader and load programs into the machine. In a neat hack, he makes the memory reads and writes audible by simply placing a radio next to the machine. [Erik Baigar] finishes off his demonstration of the computer by running some classic BASIC games like tic-tac-toe and a maze creator.

If you would like to code your own BASIC programs on more modern hardware, you should check out this BASIC interpreter for the Raspberry Pi Pico.

Video after the break.

 

The Mysterious Wobble of Muons

You might think that particle physicists would be sad when an experiment comes up with results different from what their theory predicts, but nothing brightens up a field like unexplained phenomena. Indeed, particle physicists have been feverishly looking for deviations from the Standard Model. This year, there have been tantalizing signs that a long-unresolved discrepancy between theory and experiment will be confirmed by new experimental results.

In particular, the quest to measure the magnetic moment of the muon started more than 60 years ago, and it has been measured ever more precisely since. From an experiment in 1959 at CERN in Switzerland, to the turn of the century at Brookhaven, to this year’s result at Fermilab, the magnetic moment of the muon seems to be at odds with theoretical predictions.

Although a statistical fluke is basically excluded, this value also relies on complex theoretical calculations that are not all in agreement. Instead of heralding a new era of physics, it might just be another headline too good to be true. But some physicists are mumbling “new particle” in hushed tones. Let’s see what all the fuss is about.

The Electron’s Big Brother

The muon is often called the big brother of the electron: it is an elementary particle with a negative charge, and it is about 200 times heavier than its little brother. Muons are omnipresent because they are produced when cosmic rays smash into the atoms of our atmosphere. If you hold out your hand, there is probably a muon going through it every second. To study muons, physicists can also produce them artificially using particle accelerators that mimic the reactions occurring in our atmosphere. The generated muon beam can then be circulated and stored in a big magnetic ring.

Muons have a spin and charge that create a magnetic moment, which measures the strength and orientation of their magnetic field. You can imagine muons as tiny permanent magnets that align when placed in an external magnetic field. The “g-factor” is a dimensionless proportionality constant between the magnetic moment and the spin.

Using some fancy math, one can calculate the g-factor for muons (or electrons) from the Dirac equation, which gives a value of exactly 2. But this is not the whole truth: in quantum electrodynamics (QED), “virtual” particles can pop in and out of existence, so long as they do so quickly enough to be covered by the Heisenberg uncertainty principle. These in-and-out-of-existence loops lead to correction terms in the calculations. For the muon, loop corrections lead to a g-factor that is not exactly 2 but 2.00233183620, and the difference is referred to as the anomalous magnetic moment of the muon.
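If you want to put a number on that difference, the anomaly is conventionally defined as a = (g - 2)/2. A minimal back-of-the-envelope sketch in Python, using only the g-factor quoted above:

```python
# Sketch: the "anomalous" part of the muon's magnetic moment, a_mu = (g - 2) / 2,
# using the loop-corrected g-factor quoted in the text.
g_dirac = 2.0            # tree-level prediction from the Dirac equation
g_loops = 2.00233183620  # value including Standard Model loop corrections

a_mu = (g_loops - g_dirac) / 2
print(f"anomalous magnetic moment a_mu ~ {a_mu:.11f}")  # ~0.00116591810
```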

All You Need Is a Giant Magnet

Experimental setup of the g-2 experiment at CERN. Muons enter the magnet at the lower left and then propagate in a spiral path to the right where they exit and are analyzed.
Credit: G. Charpak et al.

In 1959, CERN decided to run an experiment to test the validity of quantum electrodynamics, which predicted the anomalous magnetic moment of the muon. Ignoring the usual tradition of coming up with a creative acronym, the experiment was simply called “g-2” (pronounced “gee minus two”).

In the experiment, a beam of protons from CERN’s synchrocyclotron was shot onto a target to produce a beam of pions that quickly decay into muons. This muon beam then entered a six-meter-long magnet.

The magnetic field was oriented perpendicular to the muon beam, which caused the muons to curve into a circular path. The field strength also varied from left to right, achieved by inserting carefully calculated shims into the magnet. This caused the muons to slowly drift from left to right, tracing out a spiral.

In this magnetic field, the spin of the muons wobbles (precesses), just like the spin of protons in an MRI machine. At the right end of the magnet, the muons are ejected and the direction of their spin relative to their momentum is analyzed. From this measurement, the anomalous magnetic moment can be calculated because it is sensitive to the difference between the orbital frequency and the spin precession frequency. Only six months after the start, the experiment came up with a result of g = 2.001165(5), which agreed well with the theoretical value at that time and thus confirmed the validity of QED. In the following years, a second experiment with 25 times better accuracy actually found a difference between theory and measurement, but this vanished after theorists refined their models. A third and final experiment at CERN confirmed the new theoretical result with an astonishing accuracy of 0.0007% (7 ppm).
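To see how a g-factor falls out of two measured frequencies, here is a deliberately simplified, non-relativistic sketch: in that limit the orbital (cyclotron) frequency is eB/m and the spin precession frequency is (g/2)·eB/m, so their relative difference is exactly a = (g - 2)/2. The field strength below is an assumed, typical storage-ring value rather than a number from the CERN experiment, and the real analyses also have to handle relativistic effects.

```python
# Toy non-relativistic sketch: recover a_mu from an orbital and a spin frequency.
# B is an assumed, typical storage-ring field; real experiments need relativistic care.
e = 1.602176634e-19       # elementary charge [C]
m_mu = 1.883531627e-28    # muon mass [kg]
B = 1.45                  # magnetic field [T] (assumed)
g = 2.00233183620         # g-factor quoted earlier

omega_c = e * B / m_mu            # orbital (cyclotron) angular frequency
omega_s = (g / 2) * e * B / m_mu  # spin precession angular frequency

a_mu = (omega_s - omega_c) / omega_c  # relative difference gives (g - 2)/2
print(f"a_mu = {a_mu:.11f}")
```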

The Tension Between Theory and Experiment

In 1984, the US took over the investigation of the muon’s anomalous magnetic moment. Using the proton accelerator at Brookhaven National Laboratory (BNL), their experiment introduced several improvements over the CERN measurements: a higher proton intensity, a 14 m diameter superconducting magnet — the largest in the world at that time — more efficient muon injection, electrostatic focusing of the muon beam, and custom-made 400 MHz waveform digitizers. The aim of the experiment was to achieve an accuracy of 0.35 ppm in order to check the loop corrections caused by the W and Z bosons, which had been discovered the year before at CERN.

To obtain an objective result, the collaboration used a “blind analysis”. This is a common technique in particle physics to avoid unintentionally biasing the analysis toward a particular result, similar to the double-blind randomized clinical trials used in medical research. In the BNL experiment, the analyses to obtain the two frequencies from which g-2 could be calculated were carried out by two different teams. In addition, each frequency had an artificially introduced offset that was unknown to the analysis teams and was only subtracted after the results had been finalized.

When the final data-taking run ended in 2001, the combined result disagreed with theory by 2.2 to 2.7 standard deviations, depending on the theoretical calculation. This caused much discussion about whether it could be a hint of new physics, because as-yet-undiscovered particles would add further correction terms to the theoretically calculated value. As with every other result, there is of course the less spectacular possibility that it is merely a statistical fluctuation, which at 2.7 standard deviations is unlikely but, as history has proven, not at all impossible. Another boring explanation would be some unaccounted-for systematic error in the experiment.

And the theoretical value is not itself bulletproof. This is because there are large corrections from particle loops that involve strongly interacting particles that cannot be calculated from theory. Instead, theorists use measured production rates from other experiments for these particles to approximate the correction term.

An Eagerly Awaited Result

Summary of the theoretical and experimental values for g-2. The theoretical consensus value is in tension with the experimental results, while new lattice calculations agree with the experimental results within their uncertainties.
Credit: V. ALTOUNIAN/SCIENCE

In 2013, the superconducting magnet of the BNL experiment was transported 3,200 miles to Fermilab near Chicago in order to repeat the experiment using Fermilab’s more intense muon beam. The result was eagerly awaited by the physics community, as the tension between theory and experiment had stood unresolved for 20 years.

A few weeks ago, Fermilab published their first result, again obtained through a blind analysis in which the frequency of their main clock was entrusted to two physicists outside the collaboration. After unblinding the data, it became clear that the result confirms the BNL measurement.

The combined tension between theory and experiment now amounts to 4.2 standard deviations, just a bit short of the 5 standard deviation threshold for claiming a discovery. Still, the difference between the theoretical and experimental values is large enough that the chance of it arising randomly is about 1 in 40,000. An experimental screw-up is also unlikely now that the result has been confirmed by two independent experiments, even though they use the same technique and some of the same equipment.
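For readers who like to check such statements, a short SciPy sketch turns the quoted significances into fluke probabilities. Treating the tail probability as a two-sided Gaussian is an assumption on my part, but it reproduces the “1 in 40,000” figure:

```python
from scipy.stats import norm

# Translate the quoted significances into the odds of a pure statistical
# fluctuation (two-sided Gaussian tails, assumed for illustration).
for sigma in (4.2, 5.0):
    p = 2 * norm.sf(sigma)   # probability of a fluctuation at least this large
    print(f"{sigma} sigma -> p = {p:.1e} (about 1 in {1 / p:,.0f})")
# 4.2 sigma comes out near 1 in 40,000; 5 sigma near 1 in 1.7 million.
```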

Ironically, the theoretical value might be to blame. In fact, on the same day that Fermilab published their result, a new theory paper published in Nature, already available as a preprint since last year, arrives at a value that is actually compatible with the experimental data. This new theoretical value was calculated completely from scratch using so-called lattice calculations running on a supercomputer. However, the result is far from the theoretical consensus value and still has to be confirmed by other independent calculations. But as mentioned earlier, it would not be the first time that the tension between theory and experiment vanished after theorists reevaluated their models. When theory and experiment hone each other, they both become sharper.

So it is still unclear if the current Standard Model of particle physics has finally been cracked open. Many people doubt that the g-2 result is due to a new particle, because we should have already seen it in current particle colliders like the LHC. On the other hand, it could be just around the corner, to be found by current or near-future colliders. In the meantime, Fermilab is already busy analyzing some of their more recent data and is still taking data, so we can expect a more accurate g-2 value soon. The topic is definitely one of the most exciting in particle physics right now and worth keeping an eye on.

How Fast is the Universe Expanding? The Riddle of Two Values for the Hubble Constant

Over the last decades, our understanding of the Universe has made tremendous progress. Not long ago, “precision astronomy” was considered an oxymoron. Nowadays, satellite experiments and powerful telescopes on Earth measure the properties of our Universe with astonishing precision. For example, we know the age of the Universe with an uncertainty of merely 0.3%, and even though we still do not know the nature of Dark Matter or Dark Energy, we have determined their abundance to better than 1%.

There is, however, one value that astronomers have difficulty in pinning down: how fast our universe is expanding. Or, more precisely, astronomers have used multiple methods of estimating the Hubble constant, and the different methods are converging quite tightly on two different values! This clearly can’t be true, but nobody has yet figured out how to reconcile the results, and further observations have only improved the precision, deepening the conflict. It’s likely that we’ll need either new astronomy or new physics to solve this puzzle.

The Discovery of the Expanding Universe

In the 1920s, Edwin Hubble used the newly built telescope at Mount Wilson Observatory to study fuzzy objects known as nebulae. Back then, astronomers were arguing about whether these nebulae are clouds of stars within our Milky Way or entirely separate galaxies. Hubble discovered stars within these nebulae whose brightness slowly fades in and out. These were known as Cepheids and had previously been studied by Henrietta Leavitt, who showed that there is a tight relationship between a Cepheid’s intrinsic brightness and the period of its variation. This means Cepheids can be used as so-called standard candles: objects whose absolute brightness is known. Since the apparent brightness of an object falls off with distance in a simple way, Hubble was able to calculate the distance to the Cepheids by comparing their apparent and intrinsic brightness. He showed that these Cepheid stars were not located within our galaxy and that the nebulae are actually distant galaxies.

Hubble also measured the velocity at which these distant galaxies are moving away from us by observing the redshifts of spectral lines caused by the Doppler effect. He found that the further away a galaxy is located, the faster it is moving away from us, described by a simple linear relationship:

v = H_0 d

The parameter H0 is what is known as the Hubble constant. Later, the Belgian priest and physicist Georges Lemaître realized that the velocity-distance relationship measured by Hubble was evidence for the expansion of the Universe. Since it is the expansion of space itself that causes other galaxies to move away from us, we are not in any privileged location; the same effect would be measured from any other place in the Universe. This is sometimes illustrated by drawing points on a balloon: when the balloon is inflated, the points move away from each other at a speed that depends on their separation. It is also better not to think of the cosmological redshift as being caused by a real velocity, as the parameter v in the above equation can easily exceed the speed of light.

Since astronomical distances are commonly measured in megaparsecs (Mpc), equal to 3.26 million light-years, the Hubble constant is expressed in (km/s)/Mpc. The value of H0 is about 70 (km/s)/Mpc, which can also be expressed as 7%/Gyr, meaning that the distance between two objects will have increased by about 7% after a billion years.
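The unit conversion is easy to check yourself. A minimal sketch, assuming H0 = 70 (km/s)/Mpc and Julian years:

```python
# Convert H0 from (km/s)/Mpc to a fractional growth rate per billion years.
H0 = 70.0                               # (km/s)/Mpc
km_per_Mpc = 3.0857e19                  # kilometres in one megaparsec
s_per_Gyr = 1e9 * 365.25 * 24 * 3600    # seconds in a billion (Julian) years

H0_per_s = H0 / km_per_Mpc              # expansion rate in 1/s
print(f"H0 ~ {100 * H0_per_s * s_per_Gyr:.1f} %/Gyr")  # ~7.2 %/Gyr
```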

The Hubble Constant is Not Constant

Even though we speak of the Hubble constant, it is a bit of a misnomer since its value changes over time. We call this the Hubble parameter H(t), while H0 is simply the value of H(t) today. We now know that the expansion of the Universe is accelerating, so what does this mean for the Hubble constant? One might think it should get bigger, but it is actually decreasing, which can be shown with a little bit of math. We can express the Hubble parameter using the distance between two points d(t) and its time derivative \dot{d}(t):

H(t) = \dot{d}(t)/d(t)

If we have an accelerated expansion with \dot{d}(t) \propto t, we get d(t) \propto t^2 and thus H(t) \propto t^{-1}. This means H(t) decreases with time. The velocity of any individual galaxy will still increase over time because it keeps getting further away. If we look at a fixed distance, however, the velocities of the galaxies that successively pass by that point will decrease over time.
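A quick symbolic check of that claim, using SymPy and the same toy assumption d(t) ∝ t²:

```python
import sympy as sp

# Toy check: for an accelerated expansion with d(t) ~ t**2,
# the Hubble parameter H(t) = d'(t)/d(t) falls off like 1/t.
t = sp.symbols('t', positive=True)
d = t**2
H = sp.simplify(sp.diff(d, t) / d)
print(H)   # -> 2/t, so H(t) decreases even though the expansion accelerates
```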

How do we actually know that we live in an accelerating Universe? The evidence came from measurements of the redshift of distant supernovae made in the late 1990s. Similar to Cepheids, Type 1a supernovae can be used as standard candles (i.e. their distance can be derived from their apparent brightness). Since exploding stars are extremely bright, they can be seen from very far away.

Looking at very distant supernovae also means looking far into the past, so if the Hubble constant is changing, it will have had a different value when the light from a given supernova started traveling towards us. When plotting distance versus redshift for supernovae, one will thus see a deviation from the linear Hubble–Lemaître law at high redshifts. In the 1990s, astronomers expected to see evidence for a decelerating Universe, as they thought the expansion should be slowed down by the gravitational pull of matter. Surprisingly, they found an accelerating expansion, which was evidence for another form of matter or energy that acts repulsively.

Einstein had originally introduced such a term into his equations of General Relativity, known as the cosmological constant and denoted by the Greek letter Λ (Lambda). Ironically, it was introduced to allow a static Universe, so Einstein abandoned the idea (“my biggest blunder”) when Hubble discovered the expansion of the Universe. Later, the term Dark Energy was coined for whatever drives the accelerated expansion.

The Echo of the Big Bang

How do we tell how far away other stars are, anyway? Astronomers have constructed a cosmic distance ladder that extends distance measurements step by step using different methods. At the base of the ladder are nearby stars whose distance can be directly determined through measurements of parallax — the apparent shift of an object’s position due to a change in the observer’s point of view. These measurements are then used to calibrate the distances of Cepheids, which in turn are used to calibrate the distances to Type 1a supernovae, whose brightness can be inferred from other observable properties.

Besides the distance ladder measurements described above, there are other ways to determine the Hubble constant. One of the most precise measurements of the properties of our Universe comes from observations of the cosmic microwave background radiation (CMB). The CMB was accidentally discovered by the radio astronomers Penzias and Wilson after they ruled out that the signal they saw was caused by pigeons nesting in their antenna. This omnipresent source of electromagnetic radiation, which peaks in the microwave region, was created about 380,000 years after the Big Bang. Before that, the Universe was an opaque plasma in which light constantly bounced off free electrons and protons. Once the plasma had cooled down to about 3000 K, electrons combined with protons to form neutral hydrogen atoms, light could travel freely, and the Universe became transparent.

This light, redshifted by the expansion of the Universe, can now be observed as the CMB. Since CMB photons have been moving freely since they were last scattered, they contain a snapshot of the Universe as it looked 380,000 years after the Big Bang. By measuring the CMB and comparing it with cosmological models, it is thus possible to extract important parameters like the aforementioned abundances of Dark Matter and Dark Energy, or the Hubble constant.

Is the Universe Expanding Faster Than It Should?

Beginning at left, astronomers use the NASA/ESA Hubble Space Telescope to measure the distances to Cepheid variables by their parallax. Once the Cepheids are calibrated, astronomers move beyond our Milky Way to nearby galaxies (shown at centre). They look for Cepheid stars in galaxies that recently hosted Type Ia supernovae and use the Cepheids to measure the luminosity of the supernovae. They then look for supernovae in galaxies located even farther away.
Credit: NASA, ESA, A. Feild (STScI), and A. Riess (STScI/JHU), CC BY 4.0

Currently, the most precise measurement of the CMB was performed by the Planck satellite. Its observations agree with the current cosmological standard model, the ΛCDM model, where Λ stands for Dark Energy in the form of the cosmological constant and CDM for Cold Dark Matter. The Hubble constant derived from the Planck measurement is H0 = (67.4 ± 0.5) km/s/Mpc.

However, distance ladder measurements give a value that is about 10% higher. The most precise value in this case was derived by the SH0ES team, which used the known distances of nearby Cepheids in the Milky Way and Large Magellanic Cloud to calibrate the distances of extragalactic Type 1a supernovae, as illustrated in the picture. Compared to the Planck measurement, they arrive at a significantly higher value of H0 = (74.03 ± 1.42) km/s/Mpc. The tension between these two values is 4.4 standard deviations, which corresponds to a probability of less than 0.001% of being due to chance.
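The quoted 4.4 sigma is straightforward to reproduce from the two published numbers, if we assume the uncertainties are independent and Gaussian:

```python
import math

# Tension between the Planck (CMB) and SH0ES (distance ladder) values of H0,
# assuming independent Gaussian errors.
H0_planck, err_planck = 67.4, 0.5     # km/s/Mpc
H0_shoes, err_shoes = 74.03, 1.42     # km/s/Mpc

tension = (H0_shoes - H0_planck) / math.hypot(err_planck, err_shoes)
print(f"tension ~ {tension:.1f} sigma")  # ~4.4 sigma
```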

The difference between distance ladder-based measurements of H0 and the value derived by CMB and BAO measurements. The arrows indicate how the value of H0 would be altered by new physics.
Credit: A. G. Riess, et al.

Of course, many people have tried to pin this discrepancy on unaccounted-for errors in either of the experiments, but without success. And the discrepancy is not only between these two experiments: other distance ladder measurements also point to a higher value of H0.

Making the whole situation even stranger, the CMB measurement by Planck has recently been confirmed by the Atacama Cosmology Telescope, which measured a Hubble constant consistent with Planck’s value. In addition, CMB measurements are backed by observations of so-called baryon acoustic oscillations (BAO) combined with other astronomical data. In general, one can observe the trend that values of H0 derived from the early Universe (CMB, BAO) are lower than those from distance ladder measurements, which use objects at much lower redshift and thus capture a more recent state of the Universe.

An important point is that the CMB measurement is model-dependent, meaning that H0 is derived under the assumption that the ΛCDM model describes our Universe. So an exciting explanation for the discrepancy would be new physics beyond the current cosmological standard model. Among the many new-physics interpretations of the H0 discrepancy is the idea that Dark Energy is not simply constant but varies with time. Other theories invoke interacting Dark Matter or new relativistic particles. However, as can be seen in the figure, none of these ideas completely resolves the Hubble tension.

New Techniques to Measure Cosmic Expansion

Other techniques for determining the Hubble constant include the measurement of gravitational-lensing time delays. Strong gravitational lenses can create multiple images of an object located behind them. Since the images correspond to different light paths, they arrive with relative time delays, which can be measured when the object varies in brightness. By modeling the gravitational potential of the lens and knowing the redshift of both the lens and the source, it is possible to extract the Hubble constant from this time-delay measurement. The H0LICOW (H0 Lenses in COSMOGRAIL’s Wellspring – astrophysicists have a weakness for wacky acronyms) collaboration has recently used this method to determine the Hubble constant, and their value is consistent with the distance ladder measurements and in tension with the CMB result.

In the future, completely independent measurements of the Hubble constant may shed more light on this mystery. One of them is the use of gravitational-wave “standard sirens”. In this case, the absolute distance is determined directly from the gravitational-wave measurement, while the redshift is determined from simultaneous observations of electromagnetic radiation. The advantage of using gravitational waves is that the absolute distance of the source is obtained without any intermediate distance measurements, so any systematic error lurking in the cosmic distance ladder will not influence the result.

The method has been used to extract the Hubble constant from the gravitational-wave events GW170817 and GW190521; however, due to the large error bars, the results are consistent with both the CMB and the distance ladder measurements. Fortunately, the uncertainty will shrink as more and more gravitational-wave events are detected, so we will likely be able to tip the balance toward either the high or the low H0 value in the coming years.

On the one hand, the Hubble tension is an annoying inconsistency in our otherwise well-confirmed understanding of the Universe. On the other hand, it might be an exciting glimpse of new physics. So let’s keep the hopes up that future observations will solve this puzzle and lead to new revelations.

A Brief History of Optical Communication

We live in the information age, where access to the internet is considered a fundamental human right. Exercising this right largely relies on the technological advances made in optical communication. Using light to send information has a long history: from ancient Greece, through Claude Chappe’s semaphore towers and Alexander Graham Bell’s photophone, to fiber-optic networks and the future satellite internet constellations currently being developed by tech giants.

Let’s dive a little bit deeper into the technologies that were used to spread information with the help of light throughout history.

Semaphores and Heliographs

Reconstruction of a hydraulic telegraph at Thessaloniki Technology Museum. Credit: Gts-tg, CC BY-SA 4.0

Since light travels much further through air than sound, visual signaling has always been the method of choice for broadcasting information over long distances. One of the earliest examples is the Phryctoriae of ancient Greece, a system of towers built on mountaintops that could relay messages by lighting torches. Allegedly, this is how the news of the fall of Troy was spread throughout the country. The Greeks came up with different methods of encoding messages. One was to have two groups of five torches, where the number of torches lit in each group gave the row and column of a letter in a 5×5 matrix of the Greek alphabet known as the Polybius square. The other was the hydraulic telegraph, which consisted of a container filled with water and a vertical rod floating within it. The rod was inscribed with various messages along its height. When the remote torch signal was received, water was slowly drained from the container until the torch went out again. The water level, read off from the position of the inscribed rod, could then be correlated with a specific message.
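As a small illustration of the torch code, here is a sketch that maps letters to (row, column) torch counts in a 5×5 Polybius square. It uses the 25-letter Latin alphabet (with I and J sharing a cell) rather than the original Greek one, purely for convenience:

```python
# Sketch of the Polybius-square torch code: each letter becomes a pair of torch
# counts (left group = row, right group = column). Latin alphabet for convenience.
ALPHABET = "ABCDEFGHIKLMNOPQRSTUVWXYZ"   # 25 letters, J folded into I

def to_torches(message):
    signals = []
    for ch in message.upper().replace("J", "I"):
        if ch in ALPHABET:
            row, col = divmod(ALPHABET.index(ch), 5)
            signals.append((row + 1, col + 1))   # torches lit on the left / right
    return signals

print(to_torches("TROY"))   # [(4, 4), (4, 2), (3, 4), (5, 4)]
```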

Semaphore towers and coding scheme devised by Claude Chappe. Credit: Govind P. Agrawal, Public Domain

In the late 18th century, the Chappe brothers devised and erected a network of semaphore towers in France for military communication. On top of each tower was a semaphore composed of two movable wooden arms connected by a crossbar. By adjusting the angles of the arms and the crossbar, a total of 196 symbols could be displayed, which were observed from the next tower with a telescope. Because the sender waited for the downline station to copy the symbol, the communications protocol already included an ACK signal as a means of flow control. In terms of data rate, the system could reach about 2-3 symbols per minute, and it took about two minutes for a symbol to travel from Paris to Lille over 22 stations and 230 km.
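Those figures translate into a throughput that looks hilariously small today. A rough sketch, treating each of the 196 symbols as carrying log2(196) bits and taking the optimistic three symbols per minute:

```python
import math

# Back-of-the-envelope throughput and latency of the Chappe line, from the
# figures quoted above (196 symbols, ~3 symbols/minute, ~2 minutes Paris-Lille).
bits_per_symbol = math.log2(196)    # ~7.6 bits
rate = 3 * bits_per_symbol / 60     # bits per second
print(f"~{bits_per_symbol:.1f} bits/symbol, ~{rate:.2f} bit/s, ~120 s latency over 230 km")
```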

In the late 19th and early 20th century, the heliograph was widely used for military communication. It consisted of a mirror that could be pivoted or blocked with a shutter to generate flashes of sunlight and was mostly used to transmit Morse code. Even though most armies had retired the heliograph by the 1940s, it was still used by Afghan forces during the Soviet invasion in the 1980s and is still included in many survival kits for emergency signaling.

Bell’s Greatest Invention

Illustration of the transmitter part of the photophone. Credit: Wikimedia Commons, Public Domain.

Many of you probably know the kind of DIY project where an audio signal is transmitted over a laser beam, which is surprisingly easy to build. The idea goes back to Alexander Graham Bell, who in 1880 invented the photophone, which he considered his “greatest invention ever made, greater than the telephone”. It could transmit speech wirelessly using a flexible mirror mounted at the end of a speaking tube to modulate the intensity of reflected sunlight. The receiver consisted of a selenium photocell at the focus of a parabolic mirror. Bell and his assistant Tainter also built nonelectric receivers using materials coated with lampblack, thereby discovering the photoacoustic effect. Even though Bell was immensely proud of his invention, to the point where he wanted to name his second daughter “Photophone”, the device never really took off. This was mainly because radio transmission, as pioneered by Marconi a few years later, far surpassed the distances achievable with light and did not require a direct line of sight.

Guiding Light Through Glass

Increase of the bandwidth-distance product throughout history. The squares mark the introduction of new technologies like wavelength-division multiplexing (WDM) and space-division multiplexing (SDM). Credit: Govind P. Agrawal

Apart from a few military projects, telecommunication in the 20th century was mainly conducted via coaxial cables and microwave signals in the relatively low-frequency 1-10 GHz range. This changed with the development of fiber-optic communication in the 1970s, enabled by the invention of low-loss optical fibers and semiconductor lasers.

The main drawback of high-speed communication via coaxial cables is that signals have to be repeated about every kilometer to make up for cable losses. With wireless radio frequency (RF) communication, the repeater spacing can be a lot larger, but in both cases, the bandwidth is limited to ~100 Mbit/s due to the “low” frequency of the RF carrier.

The frequency of visible and infrared light is about 10^14 Hz, much higher than the 10^9 Hz (gigahertz) frequencies used for RF communication. As a consequence, the optical spectrum is about 2600 times wider, in terms of frequency, than the entire RF spectrum. This broader bandwidth enables much higher data rates.

One of the first applications of fiber optics was the control of short-range missiles through a fiber-optic tether attached to the back of the missile that rapidly unspooled during flight. In 1977, General Telephone and Electronics sent the world’s first live telephone traffic through a fiber-optic system at 6 Mbit/s. Today, the worldwide optical fiber network is estimated to span more than 400 million kilometers, close to three times the distance to the Sun.

Optical fiber communication soon far surpassed the transmission speeds of RF communication and was further boosted by multiplexing techniques like wavelength-division multiplexing (sending multiple wavelengths down the same fiber), time-division multiplexing (separating signals by their arrival times), and space-division multiplexing (using multi-core or multimode fibers). Using a combination of these techniques, data rates of up to 11 Pbit/s have been demonstrated in the lab. The low light loss of 0.2 dB/km (i.e. the intensity drops by only around 5% per kilometer) in modern fiber cables enables repeater spacings of ~80 km.
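The attenuation figures are easy to sanity-check: 0.2 dB/km means the power drops by a factor of 10^(0.02) per kilometer. A minimal sketch:

```python
# Remaining optical power for a 0.2 dB/km fibre after various distances.
loss_db_per_km = 0.2

def remaining(km):
    return 10 ** (-loss_db_per_km * km / 10)   # fraction of launched power left

print(f"after 1 km:  {remaining(1):.1%}")    # ~95.5%, i.e. ~5% loss
print(f"after 80 km: {remaining(80):.1%}")   # ~2.5%, hence the repeater spacing
```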

Internet From Your Light Bulb

We still mostly use the RF spectrum for wireless communication, but there has been some renewed interest in wireless optical. At short distances, this goes under the catchy name LiFi and became a trendy topic about 10 years ago, partly triggered by this TED talk. It advertised the idea of using the already existing infrastructure of regular LED lighting for data transmission.

Some of the advantages are that it is more efficient, more secure against eavesdropping, and enables higher bandwidths. However, the idea of having your home WiFi transmitted through your light bulbs never really caught on. Arguably, one of the reasons is that a connection that depends on light shining onto your device is not always considered an advantage. So far, LiFi is only used in some industrial applications where electromagnetic interference or security are important issues. But lower-bandwidth versions are prime areas for hacking.

Going Long Range

The optical transmission of data over long ranges goes under the name free-space optical communication (FSO). You may remember Facebook’s Aquila drone program, a giant solar-powered aircraft that was meant to stay in the stratosphere for months and beam internet to remote areas. In addition to standard GHz frequency bands for air-to-ground communication, they were also experimenting with free-space optical links. The technology behind this is still similar to Bell’s photophone, although we now use IR lasers instead of sunlight. After Facebook canceled the Aquila drone program in 2018 due to technical difficulties, it became public that they are working on a similar system that uses satellites instead of drones. In September 2020, Facebook’s subsidiary PointView Tech launched the Athena satellite, which is supposed to test a laser ground link.

Google (or Alphabet if you prefer) is/was working on similar projects called Loon and Taara. Maya Posch just wrote a more detailed article about Loon. Its goal was to send a network of high-altitude balloons into the stratosphere to provide internet access to underserved areas, but the project was shut down a few weeks ago. Within Project Loon, a 155 Mbit/s laser link between two balloons more than 100 km apart was demonstrated. Project Taara builds on this success and aims to develop towers that use free-space laser communication to deliver 20 Gbit/s connectivity over distances of 20 km. Compared to installing fiber-optic cables, this would be a cost-effective and quickly deployable way to bring high-speed connectivity to remote areas.

Transmitter-receiver pair of the open-source project Ronja. Credit: Twibright Labs

Similar systems are already commercially available from a company called Koruza, delivering up to 10 Gbit/s, albeit over a modest range of 150 m. Of course, hackers have also played around with the technology. Way back in 2001, the open-source project Ronja provided instructions for building a low-cost transmitter-receiver pair capable of 10 Mbit/s communication over a 1.4 km range. As a transmitter, it just uses a standard red LED collimated by large lenses salvaged from magnifying glasses. Ronja works in most weather conditions, including rain and snow, but fails in fog.

Artist rendering of inter-satellite link via laser communication. Credit: Mynaric

This marks one of the major downsides of FSO. While requiring a direct line of sight makes communication more secure, it also imposes some restrictions. Cloudy weather can make satellite-to-ground communications break up, so microwave signals are considered more viable in this case. However, future internet satellite constellations like SpaceX’s Starlink, OneWeb, or Amazon’s Project Kuiper are likely to use laser communication as a secure, high-bandwidth link between satellites. At the forefront of developing this hardware are the German companies Tesat and Mynaric. Both companies offer laser systems with data rates of up to 10 Gbit/s between LEO satellites and ground stations. For inter-satellite links, Tesat’s laser systems can achieve 1.8 Gbit/s between geosynchronous satellites up to 80,000 km apart, while Mynaric’s laser communication products carry 10 Gbit/s over distances of up to 8,000 km.

The advancement of optical communication from the ancient Phryctoriae to modern laser links has been driven by the goal of expanding humanity’s interconnectedness. Since the beginning, data rates have increased by roughly 12 orders of magnitude, culminating in a space race to provide global broadband access via satellite networks. Bringing internet access to underserved areas is certainly a noble goal, but we may also question the point of enabling ever-higher bandwidths when most of it is devoured by video streaming. Although there is no strict fundamental limit to the bitrate achievable with optical communication, it is also interesting to ask what comes next. Perhaps neutrino communication?

4-bit Retrocomputer Emulator Gets Custom PCB

It might be fair to suspect that most people considered digital natives have little to no clue about what is actually going on inside their smartphones, tablets, and computers. To be fair, it is not easy to understand how modern CPUs work, but this was different at the beginning of the ’80s, when personal computers were just starting to become popular. People who grew up back then might have a much better understanding of computer basics, thanks to computer education systems. The Busch 2090 Microtronic Computer System, released in Germany in 1981, was one of these devices that taught people the basics of programming and machine language. It was also [Michael Wessel]’s first computer, and even though he is still in proud possession of the original, he recently recreated it using an Arduino.

The original Microtronic was sold under the catchy slogan “Hobby of the future which has already begun!” Of course, the specs of the 4-bit, 500 kHz TMS 1600 inside the Microtronic seem laughable compared to modern microcontrollers, but the machine ran a virtual environment whose instruction set was arguably better suited for teaching than the chip’s native assembly. He points out, though, that the instruction manual was exceptionally well written and is still highly effective at teaching students the basics of computer programming.

He already wrote an Arduino-based Microtronic emulator a couple of years back. In his new project, he got around to extending the functionality and creating a custom PCB for the device. The whole thing is based on an ATmega 2560 Pro Mini and includes an SD card module for file storage, an LCD, and a whole bunch of pushbuttons. He also added an RTC module and a speaker to recreate some of the original functions, like programming a digital clock or composing melodies. The device can also emulate the cassette interface of the original Microtronic, which allowed saving programs at a whopping data rate of 14 baud.

He has certainly done a great job of preserving this beautiful piece of retro-tech for the future. Instead of an Arduino, retro computers can also be emulated on an FPGA, or you can take the original hardware and extend it with a Raspberry Pi.

Space is Radioactive: Dealing with Cosmic Rays

Outer space is not exactly a friendly environment, which is why we go to great lengths before we boost people up there. Once you get a few hundred kilometers away from our beloved rocky planet, things get uncomfortable due to the lack of oxygen, extreme cold, and high doses of radiation.

The latter in particular poses a great challenge for long-term space travel, so people are working on various concepts to protect astronauts’ DNA from being smashed by cosmic rays. This has become ever more salient as NASA contemplates future manned missions to the Moon and Mars. So let’s learn more about the dangers posed by galactic cosmic rays and solar flares.

Radiation from Space

When German Jesuit priest and physicist Theodor Wulf climbed the Eiffel Tower in 1910 with an electrometer, he wanted to show that natural ionizing radiation originates from the ground. And while the readings at the top of the tower were lower than at ground level, they were still much higher than expected if the ground were the only source. A few years later, Victor Hess undertook several risky balloon flights to altitudes of up to 5.3 km to systematically measure the radiation level by observing the discharge rate of an electroscope. His experiments showed that radiation levels increase above ~1 km, and he correctly concluded that there must be some source of radiation penetrating the atmosphere from outer space.

The cosmic rays that rain down on Earth are predominantly protons (~90%), helium nuclei (~9%), and electrons (~1%). In addition, there is a small fraction of heavier nuclei and a tiny whiff of antimatter. These particles are blasted to us from the Sun and other galactic and extragalactic natural particle accelerators like supernovae or black holes.

How Earth Keeps Us Safe

Van Allen Belt and location of South Atlantic Anomaly
Credit: Marko Markovic, public domain

Luckily for us, the Earth’s magnetic field deflects many of the charged particles so that they never hit the atmosphere. Some of them are captured in two donut-shaped rings around the Earth, the Van Allen Belts. In addition, the atmosphere also does a good job of protecting us. Primary cosmic rays hitting the atoms in our atmosphere produce a shower of secondary particles with much lower energies so that most of them will not reach the ground.

The average radiation exposure here on Earth is about 3 mSv per year. Depending on your altitude, about 0.39 mSv (13%) of that is contributed by cosmic radiation. Flying in an airplane at 12 km altitude increases your radiation exposure rate about 10x compared to sea level. Things are even worse if your flight route passes near the geomagnetic poles, where charged particles are concentrated. If you are interested in the radiation dose you will receive on your next flight, check out NASA’s NAIRAS webpage. It provides global maps of radiation dose rates at different altitudes, produced in near real-time from measurements of solar activity and cosmic rays.

Doses on the ISS, the Moon, and Mars

Comparison of radiation doses. Note logarithmic scale.
Credit: NASA/JPL-Caltech/SwRI, Public Domain

For a crew member of the ISS, the average radiation dose is about 72 mSv for a six-month stay during solar maximum, roughly 50 times the rate on Earth. Because the contribution from solar particles varies with the 11-year solar cycle, one might think that the radiation dose is lower during solar minimum. In fact, the dose is then about twice as high as during solar maximum.

This is because the dominant source of radiation on the ISS is galactic cosmic rays (GCR), which are deflected by the Sun’s magnetic field. GCR intensity is highest during solar minimum, when the Sun’s magnetic field is weakest. Even a big solar particle event (SPE) like a solar flare or coronal mass ejection cannot hurt ISS astronauts much; instead, it sweeps away other charged particles, thereby reducing the GCR flux for weeks. This effect is known as the Forbush decrease.

Another big contribution to the radiation exposure aboard the ISS comes from protons trapped in the Van Allen belt. The exposure is largest when the ISS crosses the so-called South Atlantic Anomaly (SAA), where the radiation belt comes closer to Earth’s surface. Most of the time, the ISS orbits comfortably below the inner Van Allen belt, but during the 5% of mission time that the ISS spends in the SAA, the belt can be responsible for more than 50% of the total absorbed radiation dose.

The Moon, on the other hand, is exposed to all kinds of cosmic rays since it is located much further away and has neither a magnetic field nor an atmosphere. Here, SPEs like the big solar storm of August 1972, which occurred between the Apollo 16 and Apollo 17 lunar missions, can be life-threatening for astronauts. Recently, the Lunar Lander Neutrons and Dosimetry (LND) experiment aboard China’s Chang’E 4 lander measured the radiation exposure on the lunar surface: at about 500 mSv/year, the dose rate is nearly 200 times higher than on Earth.

Plans for manned missions to Mars are already underway, and radiation exposure is one of the main concerns. Estimates of the radiation dose for a trip to Mars are based on measurements by the Radiation Assessment Detector (RAD) piggybacking on the Curiosity rover. During a 360-day round trip, an astronaut would receive a dose of about 660 mSv. Assuming the crew spends 18 months on the surface while waiting for the planets to realign, they will be exposed to an additional 330 mSv. The total exposure for an astronaut going to Mars and back will therefore be around 1,000 mSv. This is equal to the exposure limit for NASA astronauts over their entire career, and corresponds to a 5.5% risk of death from radiation-induced cancer. So although there is certainly a severe health risk, radiation exposure is not a showstopper for a mission to Mars.
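Putting those numbers side by side makes the scale of the problem clear. A rough sketch using only the doses quoted in this section (the Earth comparison assumes the 3 mSv/year average mentioned earlier):

```python
# Rough Mars-mission dose budget from the RAD-based figures quoted above (in mSv).
transit = 660        # ~360-day round trip in deep space
surface = 330        # ~18 months on the Martian surface
earth_per_year = 3.0 # average background dose on Earth

total = transit + surface
print(f"total ~{total} mSv, equivalent to ~{total / earth_per_year:.0f} years "
      "of average background exposure on Earth")
```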

Protecting Space Travelers

The radiation protection plan for the Orion spacecraft includes building a shelter from stowage bags during an SPE.
Credit: NASA

Astronauts aboard the ISS are partially protected against radiation by the station’s aluminum hull. Materials with low atomic number provide the most effective protection against space radiation, including secondary particles like neutrons. Such lightweight materials are also cheaper to launch. The ISS crew quarters are equipped with polyethylene bricks that reduce the radiation dose by 20%.

If we ever construct settlements on the Moon, they will probably be buried beneath lunar soil to be shielded from SPEs and other cosmic rays. On the downside, this would also increase the exposure to secondary neutrons, as measurements of the Apollo 17 Lunar Neutron Probe Experiment have shown.

NASA’s Orion spacecraft is planned to bring people to the Moon in 2024 and eventually to Mars. Since the additional mass of dedicated shielding would simply be too expensive, the radiation protection plan includes the option of building a shelter from onboard supplies, including food and water, in case of an SPE.

Let’s not forget that space radiation is dangerous not only to humans but also to electronic equipment. High-energy particle interactions can easily damage critical electronic systems on a spacecraft and cause total or partial mission loss for one or two satellites per year. Extreme SPEs can even affect equipment on Earth, like the 1972 solar storm, which is said to have caused the accidental detonation of two dozen US sea mines in Vietnam.

For the future of space travel, more advanced active shielding concepts are the topic of ongoing research. These include powerful magnetic or electrostatic fields and plasma “bubbles” surrounding a spacecraft to deflect charged particles. Of course, radiation protection is only one of the many challenges posed by long-term space travel, but let’s hope we will find ways to keep our astronauts safe and healthy.

Prism Lighting – The Art of Steering Daylight

The incandescent light bulb was one of the first applications of electricity, and it’s hard to overestimate its importance. But before the electric light, people didn’t live in darkness — they found ways to redirect sunlight to brighten interior spaces. This was made possible by an understanding of the basic principles of optics and the work of skilled glassmakers who constructed prism tiles, deck prisms, and vault lights. These century-old techniques are still applied today to diffuse LEDs or to increase the brightness of LCD screens.

Semantics First!

People in optics are a bit sloppy when it comes to the definition of a prism. While many optical prisms are certainly not geometric prisms, Wikipedia defines a prism as a transparent optical element with flat, polished surfaces of which at least one is angled. As can be seen in the pictures below, some of the prisms here do not even stick to this definition. Browsing the catalog of your favorite optics supplier, you will find a large variety of prisms used to reflect, invert, rotate, disperse, steer, and collimate light. It is important to point out that we are not so much interested in dispersive prisms that split a beam of white light into its spectrum of colors, although they do make great album covers. The property that matters in this article is their ability to redirect light through refraction and reflection.

A Safe Way to Bring Light Under Deck

A collection of deck lights used to direct sunlight below deck in ships. Credit: glassian.org

One of the most important uses of prism lighting was on board ships. Open flames could have disastrous consequences aboard a wooden ship, so deck prisms were installed to direct sunlight into the areas below decks. One of the first patents for deck lights, “THE GREAT AND DURABLE INCREASE OF LIGHT BY EXTRAORDINARY GLASSES AND LAMPS”, was filed by Edward Wyndus as early as 1684. Deck prisms were typically 10 to 15 centimeters across. The flat top was installed flush with the deck, and sunlight was refracted and directed downward from the prism point. Because of the reversibility of light paths (“If I can see you, you can see me”), deck prisms also helped to spot fires below deck.

Making Shopping Easier for the Ladies

The shopping experience in the 19th century was much improved after the invention of vault lights. Credit: glassian.org

In the 19th century, the idea of prism lighting was adapted to vault lights that directed sunlight into sidewalk vaults and basements. Compared to open grates, these not only provided protection from rain but were also easier to walk on. The latter was apparently considered a great advantage for shop owners, helping them bring women closer to their store windows despite the impractical footwear of the time.

Prism shaped pavement light redirecting sunlight through total internal reflection. Credit: glassian.org

In the beginning, vault covers used round plano-convex lenses or even just flat glass. The prism shape used in ship decks was only adapted for vault lights later, by Edward Hayward in 1871. His glass prisms not only let light pass through but also redirected it sideways into the room. Hayward’s prisms relied on total internal reflection, which occurs when light tries to exit the glass at more than a critical angle.
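The critical angle is set purely by the refractive indices on either side of the boundary, via Snell’s law. A short sketch for a typical glass-air interface (n ≈ 1.5 is an assumed textbook value, not something from Hayward’s patent):

```python
import math

# Critical angle for total internal reflection at a glass-air boundary:
# sin(theta_c) = n_air / n_glass. Light hitting the surface from inside the
# glass at a steeper angle than this cannot exit and is reflected instead.
n_glass, n_air = 1.5, 1.0   # assumed typical values
theta_c = math.degrees(math.asin(n_air / n_glass))
print(f"critical angle ~ {theta_c:.1f} degrees from the surface normal")  # ~41.8
```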

From Prism Tiles to Gas Cylinders

Eventually, prism glass was also applied to vertical windows with much commercial success. The biggest player in the game was the Luxfer Prism Company, which started selling prism tiles in 1897 based on an earlier patent by James G. Pennycuick. The 4-inch-square Luxfer tiles were typically installed above storefront windows.

Nice tiles on a storefront. Credit: Luxfer Prism Glass Tile Collector.

The inner surface of the tile was covered with horizontal prisms that redirect light deeper into the room than the sunlight would otherwise reach on its own.

Although their commercial success was brought to a halt by the availability of cheap electric lighting, the tiles can still be seen in many small towns in the US. The Luxfer company survived by switching its business to metal products and is now the world’s biggest manufacturer of high-pressure aluminum cylinders for gas storage. Luxfer tiles, which often feature ornamental patterns designed by leading architects of the time, are now collector’s items. Frank Lloyd Wright was a high-profile booster.

LED Diffusers and LCD Screens

Although nowadays electricity is cheap and LEDs are super-efficient compared to incandescent lamps, making better use of sunlight for interior lighting is undoubtedly a worthy goal, if only for the superior color rendition. The modern version of prism tiles is daylight-redirecting window film. These thin plastic films include a microstructured sawtooth pattern that refracts light upwards to the ceiling, from where it is reflected deeper into the building — the thin plastic equivalent of Luxfer tiles. According to 3M, the film can save up to 52% on lighting energy costs.

Typical configuration of an LCD screen and working principle of the prism film: ray I is totally reflected and recycled by diffuse reflection, ray II is converged by the prism refraction, and ray III is recycled by other prisms. Credit: Zong Qin

You might already be familiar with prism films if you have ever looked for ways to diffuse LEDs. Microprism sheets made from polystyrene or polycarbonate are a common way to homogenize the light of an LED array. And if you have ever torn down an LCD screen, you will have noticed several plastic sheets sandwiched underneath the glass. These also contain prism films to enhance the brightness of the backlight.

As shown in the picture, the prism film converges light towards the viewer, increasing the on-axis brightness while at the same time limiting the viewing angle. Often two of these films are stacked at 90 degrees to each other to converge light in both the horizontal and vertical planes.

These films are also starting to show up in high-end LED lighting applications, and it’s probably only a matter of time, and price, before they become ubiquitous. The incandescent bulb killed the prism, the LED is killing the incandescent, and the prism is getting rediscovered. What’s old is new again!

Prisms are something of a Swiss Army knife of optics, a multipurpose tool when it comes to steering light. Even if manufacturing techniques, materials, shapes, and dimensions have changed over the centuries, the ability to redirect light with simply shaped transparent bodies still finds plenty of useful applications.
