Pedro Umbelino – Hackaday (https://hackaday.com)

Decoding the Netflix Announcement: Explaining Optimized Shot-Based Encoding for 4K
https://hackaday.com/2020/09/16/decoding-the-netflix-announcement-explaining-optimized-shot-based-encoding-for-4k/
Wed, 16 Sep 2020

Netflix has recently announced that they now stream optimized shot-based encoding content for 4K. When I read that news title I thought to myself: “Well, that’s great! Sounds good but… what exactly does that mean? And what’s shot-based encoding anyway?”

These questions are basically how I ended up down the rabbit hole of video encoding optimization history, in an effort to thoroughly dissect that announcement and properly understand it, so I can share it with you. Before I get into it, let’s take a trip down memory lane.

The 90’s

[Image: CSHOW running in DOSBox]

At the beginning of the nineties, if you wanted to display an image file on your computer, like a GIF or the new JPG format, you needed a program for that. I think I used CSHOW back then. For ‘videos’, my friends and I exchanged FLIC files, which were sequences of still frames flipped through rapidly to achieve the illusion of movement, in some piece of software I can’t recall anymore. Although some video compression standards had already been developed in the early 90s, video files were not that common. Data storage was scarce and expensive, and not even my 486 DX 33 MHz desktop, with 8 MB of RAM and a 120 MB HDD, could have handled the decoding of a modern video file in any respect (speed, RAM or disk).

I don’t even want to imagine how long it would take to transfer any meaningful video with my 28.8 kbit/s modem.
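Just for fun, here is a rough back-of-envelope answer, assuming a CD-sized 700 MB file and zero protocol overhead (which is generous):

# How long to move a 700 MB video over a 28.8 kbit/s modem?
size_bits = 700 * 8_000_000      # 700 MB expressed in bits
seconds = size_bits / 28_800     # ideal line rate, no overhead at all
print(seconds / 86_400)          # ~2.25 days of uninterrupted transfer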

JPEG compression was a game changer for images. We could now store huge images with pretty much the same quality in much less space. When it appeared, it was almost magical to me. We were constantly searching for the best compression algorithms and tricks we could use in order to spend less on floppy disks. There was ARC, ARJ, LHA, PAK, RAR, ZIP, just to name a few I used in MS-DOS. Those were all archivers with lossless compression, and most would fall short on high-definition images. But JPEG uses lossy compression, so it could produce much smaller files at the cost of some image quality.

For video, being a sequence of images, the logical next step was Motion JPEG. M-JPEG is a video compression format in which each video frame is compressed separately as a JPEG image, so you get the benefits of JPEG compression applied to each frame. Like FLIC, but for JPEGs.

Since the compression depends only on each individual frame, this is called intraframe compression. As a purely intraframe compression scheme, the image quality of M-JPEG is directly a function of each video frame’s static spatial complexity. But a video contains more information than the sum of its frames. The evolution, the transition from one frame to the next, is also information, and newer algorithms soon took advantage of this.

From Stills to Motion

MPEG-1 was one of the encoders that explored new ways to compress video. Instead of compressing each frame individually, MPEG-1 splits the video into different frame types. For simplicity, let’s say there are two major frame types in MPEG-1: the key frames (I-frames) and the prediction frames (P-frames, B-frames). MPEG-1 stores one key frame, which is a regular full frame in compressed format, and then a series of prediction frames, maybe ten or fifteen. The prediction frames are not full images but rather the difference between the frame and the last key frame, hence saving a lot of space. As for storing audio, MPEG-1 Audio (MPEG-1/2 Audio Layer III) uses psychoacoustics to significantly reduce the data rate required by an audio stream, as it removes parts of the audio that the human ear would not hear. Most of us know this audio format simply as MP3.
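To make the key-frame/prediction-frame idea concrete, here is a minimal sketch, not any real MPEG code, just the principle of storing one full frame plus differences:

import numpy as np

def encode_gop(frames):
    """Toy 'group of pictures': store the first frame whole (key frame)
    and every following frame only as its difference against that key frame."""
    key = frames[0].astype(np.int16)
    deltas = [f.astype(np.int16) - key for f in frames[1:]]
    return key, deltas

def decode_gop(key, deltas):
    """Rebuild the original frames from the key frame plus the stored deltas."""
    return [key.astype(np.uint8)] + [(key + d).astype(np.uint8) for d in deltas]

# A mostly static scene: the deltas are near zero and compress far better
# than storing every frame in full.
frames = [np.full((4, 4), 100, dtype=np.uint8) for _ in range(5)]
frames[3][2, 2] = 130                       # one small change in one frame
key, deltas = encode_gop(frames)
assert decode_gop(key, deltas)[3][2, 2] == 130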

As cool and inventive as it sounds, even before the first draft of the MPEG-1 standard had been finished, work on MPEG-2 was already under way. MPEG-2 came with interlaced video, a technique for doubling the perceived frame rate of a video display without consuming extra bandwidth, and with sound improvements. The MPEG-2 standard could compress video streams to as little as 1/30th of the original size while still maintaining decent picture quality.

MPEG-3 was folded into MPEG-2 when it was found to be redundant. But then came the more modern MPEG-4. MPEG-4 provides a framework for more advanced compression algorithms, potentially resulting in higher compression ratios compared to MPEG-2 at the cost of higher computational requirements. After it was released, there was a time when a lot of different codecs coexisted, and it was sometimes frustrating for the regular user to try to play a video file. I remember not having the DivX codec, or Xvid, or 3ivx, having to install libavcodec and ffmpeg, or maybe trying to play it in the QuickTime player, or just giving up in tears…

But one thing was certain: we now had very decent quality on our PCs, PlayStations, and digital cameras.

Streaming


Practical video streaming was only made possible by these and other advances in data compression, since it is still not practical to deliver the required amount of data uncompressed. Streaming has its origins in streaming music, sharing music, and ultimately the P2P networks that grew out of that, and there are tons of interesting stories to be told there. Nevertheless, I’m going to focus on video streaming in this article, since the whole goal was to understand what “optimized shot-based encoding” is.

We currently live in the middle of a Streaming War, with fierce competition between video streaming services such as Netflix, Amazon Prime Video, Disney Plus, Hulu, HBO Max, Apple TV+, YouTube Premium, CBS All Access, etc.

All those services try to deliver new and exclusive content, and to deliver it well. Delivering well involves many aspects, from content freshness to user experience, but one thing I think we can agree upon: when the video quality sucks, no marketing or UX team can save the day. With this in mind, several modern video encoding algorithms are used, including H.264 (a.k.a. MPEG-4 Part 10), HEVC, VP8 and VP9. Beyond those, each service tries to enhance its own video quality however it can.

Netflix Tech Blog

Netflix has a nice tech blog where you can read some seriously geeky video content. They have introduced many different optimizations into their network and their compression pipeline, and they sometimes share how in the blog. At the end of 2015 they announced a technique called Per-Title Encoding Optimization.

When we first deployed our H.264/AVC encodes in late 2010, our video engineers developed encoding recipes that worked best across our video catalogue (at that time). They tested various codec configurations and performed side-by-side visual tests to settle on codec parameters that produced the best quality trade-offs across different types of content. A set of bitrate-resolution pairs (referred to as a bitrate ladder) … were selected such that the bitrates were sufficient to encode the stream at that resolution without significant encoding artifacts.
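A bitrate ladder, then, is nothing more than a table of bitrate-resolution pairs that every title gets encoded at, and the player picks a rung based on the bandwidth it measures. Here is a rough sketch of the idea; the values are illustrative ballpark figures in the spirit of the old H.264 ladder, not Netflix’s exact published table:

# One ladder for everything: the same rungs regardless of content.
FIXED_LADDER = [
    (235,  "320x240"),
    (375,  "384x288"),
    (560,  "512x384"),
    (1050, "640x480"),
    (1750, "720x480"),
    (2350, "1280x720"),
    (3000, "1280x720"),
    (4300, "1920x1080"),
    (5800, "1920x1080"),   # the 1080p rung discussed below
]

def rung_for_bandwidth(kbps):
    """Pick the highest rung the client's measured bandwidth can sustain."""
    candidates = [rung for rung in FIXED_LADDER if rung[0] <= kbps]
    return candidates[-1] if candidates else FIXED_LADDER[0]

print(rung_for_bandwidth(3500))   # -> (3000, '1280x720')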

At that time, Netflix was using PSNR (Peak Signal-to-Noise Ratio), in dB, as its measure of picture quality. They figured out that this fixed bitrate ladder is more of an average or rule of thumb for most content, and there are several cases where it doesn’t apply. For scenes with high camera noise or film grain, a 5800 kbps stream would still exhibit blockiness in the noisy areas. On the other end, for simple content like cartoons, 5800 kbps is overkill to produce good 1080p video. So they tried different parameters for each title:

[Chart: PSNR vs. bitrate curves, one line per title]

Each line here represents a title, a movie or an episode. Higher PSNR means better overall image quality: 45 dB is very good quality, while 35 dB will show encoding artifacts. It’s clear that many titles don’t actually gain much from increasing the bitrate beyond a certain point. In general, per-title encoding will often give you better video quality: either a higher resolution for the same bitrate, or the same resolution at a lower bitrate.
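PSNR itself is simple to compute: it is just the mean squared error between the encoded frame and the original, put on a logarithmic scale. A minimal sketch, assuming 8-bit frames:

import numpy as np

def psnr(original, encoded, max_value=255.0):
    """Peak Signal-to-Noise Ratio in dB between two 8-bit frames."""
    mse = np.mean((original.astype(np.float64) - encoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")          # identical frames
    return 10 * np.log10(max_value ** 2 / mse)

# A frame with a little encoding noise lands in the "very good" 40+ dB range.
frame = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
noisy = np.clip(frame + np.random.normal(0, 2, frame.shape), 0, 255).astype(np.uint8)
print(round(psnr(frame, noisy), 1))

The catch, as we will see below, is that two encodes with the same PSNR can still look quite different to a human viewer.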

Title contents can be very different in nature. But even within the same title, there can be high-action scenes followed later by a still landscape. That’s why, when Netflix decided to chunk-encode their titles to take advantage of the cloud, splitting each movie into chunks so it could be encoded in parallel, they started to test and implement Per-Chunk Encoding Optimization in 2016. It’s the same logic as per-title, but applied to each chunk within the title. It’s like adding one more level of optimization.

Finally, in 2018, Netflix started to implement optimized shot-based encoding, and since August it is also available for 4K titles. So back to the original question. Chunking the video at more or less arbitrary points results in more or less arbitrary key frames, some of which may be very similar to one another, which is not optimal since key frames take a lot of space. What if, instead, one could choose the right, optimal key frames for each title?

In an ideal world, one would like to chunk a video and impose different sets of parameters to each chunk, in a way to optimize the final assembled video. The first step in achieving this perfect bit allocation is to split video in its natural atoms, consisting of frames that are very similar to each other and thus behave similarly to changes to encoding parameters — these are the “shots” that make up a long video sequence. (…) The natural boundaries of shots are established by relatively simple algorithms, called shot-change detection algorithms, which check the amount of differences between pixels that belong to consecutive frames, as well as other statistics.
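In its simplest form, a shot-change detector can be little more than frame differencing with a threshold. Here is a hedged sketch of that idea; real detectors add colour histograms, motion statistics and adaptive thresholds, but the principle is the same:

import numpy as np

def shot_boundaries(frames, threshold=30.0):
    """Return indices where a new shot likely starts.

    frames: list of greyscale frames as uint8 numpy arrays.
    threshold: mean absolute pixel difference that counts as a cut.
    """
    boundaries = [0]                       # the video always starts a shot
    for i in range(1, len(frames)):
        prev = frames[i - 1].astype(np.float32)
        curr = frames[i].astype(np.float32)
        if np.mean(np.abs(curr - prev)) > threshold:
            boundaries.append(i)
    return boundaries

# Each detected shot then becomes its own encoding unit, with its own
# key frame and its own encoder parameters.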

Besides optimizing the key frames for encoding, shot-based encoding has other advantages: seeking in a video sequence lands on natural points of interest (signaled by shot boundaries), and changing encoding parameters between shots goes unnoticed by the viewer, since consecutive shots already look very different across the cut. All of these changes are tweaks around the encoder, which can be H.264, HEVC or VP8; it makes no difference.

Is it worth all the trouble? To answer, let’s look at the most recent video quality metric that Netflix uses: the Video Multi-method Assessment Fusion, or VMAF. Traditionally, codec comparisons share the same methodology: PSNR values are calculated for a number of video sequences, each encoded at predefined resolutions and fixed quantization settings according to a set of test conditions. This works well for small differences between codecs, or for evaluating tools within the same codec. For video streaming, according to Netflix, PSNR is ill-suited, since it correlates poorly with perceptual quality. VMAF can capture larger differences between codecs, as well as scaling artifacts, in a way that is better correlated with perceptual quality as judged by humans. Other players in the industry recognize this and are adopting VMAF as a quality rating tool.

Using VMAF, Netflix started to run tests on their 4K titles. The results were quite impressive.

[Image: example of a thriller-drama episode showing new highest bitrate of 11.8 Mbps vs. 16 Mbps]
[Image: example of a 4K animation episode showing new highest bitrate of 1.8 Mbps vs. 11.5 Mbps]

Above are just two examples, but on average Netflix needs a 50% lower bitrate to achieve the same quality with the optimized ladder. The highest 4K bitrate is now 8 Mbps on average, which is also a 50% reduction compared to the 16 Mbps of the fixed-bitrate ladder. Overall, users get more quality for less bitrate, which is quite an achievement.

And since a picture is worth a thousand words, I’ll finish with this (notice the difference in quality and bitrate):

[Image: side-by-side frame comparison]

 

Disclaimer

The author is not trying to start a codec flame war; there are many codecs and standards, and MPEG seemed appropriate for demonstration purposes. The sheer number of encodings, specifications and their aliases pretty much guarantees mistakes somewhere in the text, so feel free to correct them in the comments.

The author also does not endorse any streaming services. We mentioned Netflix just because it shares its encoding tricks with the public. We’re sure the other services have clever stuff going on as well, they’re just not telling.

 

Separation Between WiFi and Bluetooth Broken by the Spectra Co-Existence Attack
https://hackaday.com/2020/08/07/separation-between-wifi-and-bluetooth-broken-by-the-spectra-co-existence-attack/
Fri, 07 Aug 2020

This year, at DEF CON 28 (DEF CON Safe Mode), security researchers [Jiska Classen] and [Francesco Gringoli] gave a talk about inter-chip privilege escalation using wireless coexistence mechanisms. The title is catchy, sure, but what exactly is this about?

To understand this security flaw, or group of security flaws, we first need to know what wireless coexistence mechanisms are. Modern devices can support cellular and non-cellular wireless communications standards at the same time (LTE, WiFi, Bluetooth). Given the desired miniaturization of our devices, the different subsystems that support these communication technologies must reside in very close physical proximity within the device (in-device coexistence). The resulting high level of reciprocal leakage can at times cause considerable interference.

There are several scenarios where interference can occur; the main ones are:

  • Two radio systems occupy neighboring frequencies and carrier leakage occurs
  • The harmonics of one transmitter fall on frequencies used by another system
  • Two radio systems share the same frequencies

To tackle these kinds of problems, manufacturers have had to implement strategies so that a device’s wireless chips can coexist (sometimes even sharing the same antenna) and keep interference to a minimum. These are called coexistence mechanisms; they enable high-performance communication on intersecting frequency bands and thus are essential to any modern mobile device. Although open solutions exist, such as the Mobile Wireless Standards, manufacturers usually implement proprietary solutions.

Spectra

Spectra is a new attack class demonstrated in this DEF CON talk, which is focused on Broadcom and Cypress WiFi/Bluetooth combo chips. On a combo chip, WiFi and Bluetooth run on separate processing cores and coexistence information is directly exchanged between cores using the Serial Enhanced Coexistence Interface (SECI) and does not go through the underlying operating system.

Spectra class attacks exploit flaws in the interfaces between wireless cores in which one core can achieve denial of service (DoS), information disclosure and even code execution on another core. The reasoning here is, from an attacker perspective, to leverage a Bluetooth subsystem remote code execution (RCE) to perform WiFi RCE and maybe even LTE RCE. Keep in mind that this remote code execution is happening in these CPU core subsystems, and so can be completely invisible to the main device CPU and OS.

Join me below where the talk is embedded and where I will also dig into the denial of service, information disclosure, and code execution topics of the Spectra attack.

Denial of Service

This happens when one wireless core denies transmission to the other core. DoS attacks are possible if one core is able to claim spectrum resources of the other core. As this is the basic working principle of any coexistence interface, all of them are vulnerable by definition, as long as one core keeps constantly claiming the resource for itself. Other DoS opportunities arise from one wireless core being able to crash the other via shared RAM abuse.

Information Disclosure

One wireless core can infer data or actions of the other core. One example is when connecting an HID device like a keyboard. Timings and contents of keypresses can be observed on the host that receives those keystrokes; however, an attacker who only has code execution on the WiFi chip should not be able to make such observations. While the content of the keypresses is missing, it is possible for code running on the WiFi chip to infer timing statistics about the keys pressed on the Bluetooth side. This becomes interesting for inferring passwords and password lengths.
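To get a feel for why timing alone is valuable, here is a small illustrative sketch (not the researchers’ code) of the kind of statistics an attacker sitting on the WiFi core could collect from observed keypress timestamps:

from statistics import mean, stdev

def keystroke_stats(timestamps):
    """Inter-keystroke gaps reveal typing cadence and likely word boundaries,
    even when the actual characters are never observed."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return {
        "keypresses": len(timestamps),              # e.g. a likely password length
        "mean_gap_s": mean(gaps),
        "stdev_gap_s": stdev(gaps),
        "long_pauses": sum(g > 0.5 for g in gaps),  # pauses often split words
    }

# Timestamps (in seconds) as they might be inferred from coexistence traffic
print(keystroke_stats([0.00, 0.18, 0.33, 0.52, 1.40, 1.61, 1.80]))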

Code Execution

One wireless core can execute code within the other core. The security researchers demonstrate that it is possible to execute an arbitrary WiFi address with controlled contents via Bluetooth. This happens because, when both cores are running, they share a RAM region which contains, among other information, a large function table. By overwriting a specific address, it is possible to control the WiFi core’s program counter. This means that a Bluetooth subsystem exploit can turn into a WiFi exploit. In addition, writing to the WiFi buffer and executing addresses produces various kernel panics on Android and iOS, indicating that further escalations into the host are possible and that it’s probably just a matter of time until someone pops calc.

Conclusion

Although the research was centered on Broadcom and Cypress combo chips (which cover, by the way, all iPhones, MacBooks, iMacs, older Apple Watches, the Samsung S and Note series, some Google Nexus devices, the Raspberry Pi and so forth), advisories were sent to Intel, MediaTek, Qualcomm, Texas Instruments, Marvell and NXP, and they all mention similar coexistence interfaces in their devices. So, mutatis mutandis, and because of its very nature, some Spectra-class vulnerabilities probably exist in other vendors’ chips too.

And since [Jiska] has a history of breaking wireless stuff, we can probably expect a follow-up on this research and this class of inter-chip privilege escalation vulnerabilities. Can’t wait!

A Hydrogen Fuel Cell Drone
https://hackaday.com/2019/05/06/a-hydrogen-fuel-cell-drone/
Mon, 06 May 2019

When we think about hydrogen and flying machines, it’s quite common to imagine Zeppelins, weather balloons and similar uses of hydrogen in lighter-than-air craft to lift stuff off the ground. But with smaller and more efficient fuel cells, hydrogen is gaining its place in the drone field. Project RACHEL is a hydrogen-powered drone project that involves multiple companies and has now surpassed the 60-minute flight milestone.

The initial target of the project was to achieve 60 minutes of continuous flight while carrying a 5 kg payload. The lithium-polymer battery-powered UAVs flown by BATCAM allow around 12 minutes of usable flight. The recent test of the purpose-built fuel-cell-powered UAV saw it fly for an uninterrupted 70 minutes carrying a 5 kg payload. This was achieved on a UAV with below 20 kg maximum take-off mass, using a 6-litre cylinder containing hydrogen gas compressed to 300 bar.

While this is not a world record for drones, and it’s not exactly clear whether there will be a commercial product or what the price tag would be, it is still an impressive feat for a fuel-cell-powered flying device. You can watch the footage of one of their tests below:

Drones are definitely getting more diverse and innovative as they play a more relevant role in our everyday life.

As a side note, it would be interesting to know how much hydrogen there is in a 6-litre / 300 bar cylinder and how much weight you could lift with it…
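Here is a quick ideal-gas estimate; at 300 bar hydrogen’s compressibility knocks roughly 15-20% off these figures, but it gives the order of magnitude:

R = 8.314          # J/(mol*K)
P = 300e5          # 300 bar in Pa
V = 6e-3           # 6 litres in m^3
T = 293            # ~20 degrees C

mols = P * V / (R * T)          # ~74 mol (ideal gas)
mass_h2 = mols * 2.016 / 1000   # ~0.15 kg of hydrogen in the cylinder

# If that hydrogen were released into a balloon at 1 atm instead:
vol_atm = mols * R * T / 101_325        # ~1.8 m^3
lift = vol_atm * (1.20 - 0.084)         # air vs. H2 density, ~2 kg of gross lift
print(round(mass_h2, 2), round(vol_atm, 2), round(lift, 1))

So roughly 150 grams of hydrogen: plenty of energy for a fuel cell, but as a lifting gas it would only offset a couple of kilograms even if you let it expand into a balloon, which is why this project runs it through a fuel cell instead of floating on it.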

Thanks for the tip [Itay]

Faxsploit – Exploiting A Fax With A Picture
https://hackaday.com/2019/05/04/faxsploit-exploiting-a-fax-with-a-picture/
Sun, 05 May 2019

Security researchers have found a way to remotely execute code on a fax machine by sending a specially crafted document to it. So… who cares about fax? Well, apparently a lot of people still use it in many institutions, governments and industries, including healthcare, legal, banking and commercial sectors. Bureaucracy and old procedures tend to die hard.

This is one of those exploits that deserves proper attention, for many reasons. It is well documented and is a great piece of proper old-school hacking and reverse engineering. [Eyal Itkin], [Yannay Livneh] and [Yaniv Balmas] show us their process in a nicely done article that you can read here. If you are into security hacks, it’s really worth reading, and also worth watching the DEF CON video. They focused their attention on an all-in-one printer/scanner/fax, and the results were as good as it gets.

Our research set out to ask what would happen if an attacker, with merely a phone line at his disposal and equipped with nothing more than his target`s fax number, was able to attack an all-in-one printer by sending a malicious fax to it.

In fact, we found several critical vulnerabilities in all-in-one printers which allowed us to ‘faxploit’ the all-in-one printer and take complete control over it by sending a maliciously crafted fax.

As the researchers note, once an all-in-one printer has been compromised, it could be used for a wide array of malicious activity, from infiltrating the internal network, to stealing printed documents, even to mining Bitcoin. In theory they could even produce a fax worm, replicating via the phone line.

The attack summary video is below, demonstrating an exploit that allows an attacker to pivot into an internal network and take over a Windows machine using the NSA’s Eternal Blue exploit.

Just to show how much of a legacy technology fax is, did you know that the first experimental fax machine dates back to 1843? Yep, you read that right: even before the first phone line.

How-To: Mapping Server Hits with ESP8266 and WS2812
https://hackaday.com/2019/05/02/how-to-mapping-server-hits-with-esp8266-and-ws2812/
Thu, 02 May 2019

It has never been easier to build displays for custom data visualization than it is right now. I just finished one for my office: as a security researcher, I wanted a physical map that shows me where on the planet my server is being attacked from. But the same fabrication techniques, hardware, and network resources can be put to work for just about any other purpose. If you’re new to hardware, this is an easy-to-follow guide. If you’re new to server-side code, maybe you’ll find it equally interesting.

I used an ESP8266 module with a small 128×32 pixel OLED display driven by an SSD1306 controller. The map itself doesn’t have to be very accurate; roughly knowing the country suffices, as it is more a decorative piece than a functional one. It was also a good excuse to put the 5-meter WS2812B LED strip I had on the shelf to use.

The project itself can be roughly divided into 3 parts:

  1. Physical and hardware build
  2. ESP8266 firmware
  3. Server-side code

It’s a relatively simple build that one can do over a weekend. It mashes together LED strips, ESP8266 WiFi, OLED displays, server-side code, Python, GeoIP location, Scapy, and so on… you know, fun stuff.

The Hardware

First of all, you need a map. Obvious, right? But the type of map you choose can be an important part of the project and can greatly simplify or complicate the whole setup. The issue here is the map projection. There are many projection types, and some are more straightforward to use in this project than others. The ideal one is the equirectangular projection, where latitude and longitude map linearly to the page. I used the timezone map from the CIA because, well, it’s kind of cool. But it’s not ideal, since it seems to be a Mercator projection variant (without the poles), hence the latitude increments are not the same everywhere as the map area gets exaggerated near the poles.

I used two standard A4 sheets for a 50 cm by 24 cm map. The LEDs go behind the sheets, so the thickness of the paper is important if you want the lights to really stand out. I had 5 meters of LED strip with a spacing of around 16 mm, which I cut into 10 individual strips. I positioned each strip to match the LED spacing so I would have a perfect grid. This covers the full longitude range and latitudes between 65° and -35°. It’s not perfect, but it likely covers around 95% of the internet-connected world. It’s a compromise.
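With that layout, turning a latitude/longitude pair into one of the 300 LEDs is just linear scaling. Here is a sketch of the arithmetic in Python (the firmware does the equivalent); it assumes 10 rows of 30 LEDs, the latitude span described above, and serpentine wiring where every other row runs backwards, so adjust it to however your strips are actually chained:

GRID_COLS, GRID_ROWS = 30, 10     # 300 LEDs: 10 strips of 30
LAT_MAX, LAT_MIN = 65.0, -35.0    # latitude span actually covered by the map

def latlon_to_led(lat, lon):
    """Map a coordinate to an LED index, assuming row 0 is the top strip
    and every other row is wired in reverse (serpentine layout)."""
    col = int((lon + 180.0) / 360.0 * GRID_COLS)
    row = int((LAT_MAX - lat) / (LAT_MAX - LAT_MIN) * GRID_ROWS)
    col = min(max(col, 0), GRID_COLS - 1)
    row = min(max(row, 0), GRID_ROWS - 1)
    if row % 2:                    # odd rows run right-to-left
        col = GRID_COLS - 1 - col
    return row * GRID_COLS + col

print(latlon_to_led(38.7, -9.1))   # Lisbon lands in the third row from the top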

I chose to power the LED strip straight from the 5V pin of the ESP8266 board, itself powered from the USB cable. That’s 300 LEDs in total, and even without doing the math I was pretty sure it would definitely not work. When I was prototyping, I tried it anyway and, to my surprise, more than half the LEDs still lit. I figured that if I ever needed all the LEDs on, I would have bigger problems to deal with, because my server would be under heavy attack! At least, that’s what I told myself as an excuse for being too lazy to add a decent power supply (and more wires). If you’re making a display that uses many of the pixels at one time, you need to use an external power supply.

In the end, I cut a small rectangular hole in the map so I can see the OLED screen with the last 4 IPs that were recently active. It kills a bit of the retro style, but curiosity got the best of me. This was after everything else was done, so the hole looks really crappy.

Tip: LED strips are great, with only 3 wires (VCC, GND and DATA), and each pixel is individually addressable. The DATA line is also directional: if you are cutting and soldering strips to make a board, be careful not to reverse the direction of the DATA line. Doing so will cause that strip, and all subsequent ones, to stop working. Just saying, it’s not that I was distracted and did that on the first try… (I was, I did…)

The Firmware

The ESP8266 programming was not very complicated. The setup initializes the WiFi, the OLED and the LED strips. The main loop polls the server periodically for the latest IPs and their geolocation, and then translates that data into a particular LED with a color that reflects the time elapsed since that IP address was last seen by the server, like a heat map over time. It starts red, fades to orange, yellow, green and blue, and disappears after 10 minutes.
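The colour mapping is essentially a clock: each hit starts hot and cools down over ten minutes. A small sketch of that idea, in Python for brevity, with the hue endpoints as my own assumption rather than the exact values the firmware uses:

FADE_SECONDS = 10 * 60     # a hit disappears after ten minutes

def age_to_hue(age_seconds):
    """Map the age of a hit to a hue: 0 deg (red) when fresh, sliding through
    orange, yellow and green to 240 deg (blue) just before it expires."""
    if age_seconds >= FADE_SECONDS:
        return None                      # expired: the LED goes dark
    return 240.0 * age_seconds / FADE_SECONDS

print(age_to_hue(0))      # 0.0   -> red, seen just now
print(age_to_hue(300))    # 120.0 -> green, five minutes old
print(age_to_hue(700))    # None  -> expired, turn the pixel off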

The LED strip library used was NeoPixelBus, and it’s worth mentioning that there are some limitations on the ESP8266. The ESP8266 requires hardware support to be able to send the data to the LED strip. Because of this, and the restrictions on which pins are used by each hardware peripheral, the only options are GPIO1, GPIO2, or GPIO3. The most reliable seems to be GPIO3 for DMA-supported operation; unfortunately, this is also RXD0. The normal function of this pin is serial receive, hence using the DMA method will not allow you to receive serial data. For this project that’s not an issue, but it’s worth noting.

The OLED library used was Adafruit_SSD1306. This ESP8266 module’s documentation is not great, and it took some time to figure out how to make it work. The ‘secret’, besides mapping the proper SDA, SCL and reset lines, is to call the Wire.begin() function in setup() before initializing the Adafruit library.


#define OLED_RESET 4
#define OLED_SDA 2
#define OLED_SCL 14

void setup()
{
  // Override Wire.begin from Adafruit_SSD1306 to make this work on the ESP8266 TTGO OLED
  Wire.begin(OLED_SDA, OLED_SCL);
  ...

The Server-Side

The server-side code was written in Python, using the python-geoip geolite2 and Scapy libraries. Scapy is a really great tool and library. If you’re into network testing, in particular pentesting, you probably know it. If you are not: Scapy is a packet manipulation tool written in Python by Philippe Biondi. It can forge or decode packets, send them on the wire, capture them, and match requests and replies. It can also handle tasks like scanning, tracerouting, probing, unit tests, attacks, and network discovery. Netcat has been dubbed the “TCP/IP swiss army knife”; Scapy is like that, but for packets.

To start sniffing traffic one can do:


from scapy.all import sniff
from geoip import geolite2

def custom_action(packet):
    srcip = packet[0][1].src       # source IP of the sniffed packet
    gps = geolite2.lookup(srcip)   # local GeoLite2 lookup, no external request
    ...

sniff(filter="ip", prn=custom_action, count=128)

This tells Scapy to sniff IP traffic and, for each packet, run the custom action. In this example it will use the geolite2 library to look up the IP. The python-geoip-geolite2 package includes GeoLite2 data created by MaxMind, and its use is pretty straightforward. Since the GeoLite2 database is local, there is no need for the server to make outside requests to identify the IP location (it’s also not very accurate; better options do exist). Filtering IP traffic is not enough to identify attacks, since legitimate traffic also appears. The server is private, so let’s just say a big part of the traffic is “unwanted”. If we wanted to take it further, we could connect the map to our favorite NIDS. But then again, this was a weekend project…

I also used the SimpleHTTPServer package to set up a webserver on a custom port, so that the ESP8266 can connect to it and fetch the latest results, which are regularly written to a file. To use a port other than 80, one should use a handler, like so:


import SimpleHTTPServer, SocketServer

PORT = 8000  # arbitrary example; use whatever port the ESP8266 is set to poll
Handler = SimpleHTTPServer.SimpleHTTPRequestHandler
httpd = SocketServer.TCPServer(("", PORT), Handler)
httpd.serve_forever()

This will create an open HTTP port at PORT and serve whatever files are in the directory where the Python code is run. This is, of course, not very safe from a security perspective.

The Results

In the end I was happy with the results. I took a lot of shortcuts and simplifications, but hey, it’s a hack. There are many things to improve if I ever get to do version 2, which is clearly on my mind and involves an entire wall. Now, you can say that for accuracy and resolution, an LCD screen is clearly much better. And that’s true. But maybe it’s not as fun. I usually ask myself how I can do project XYZ, not why. That’s how I end up with a workshop full of stuff and a bunch of, let’s be honest, pretty much useless projects.

My takeaway is the enjoyment I get from making them and what I learn along the way. And there’s a blinking-lights bonus in this one. Here’s a time-lapse of around 6 minutes (1 second per frame):

I fondly call China “the permanent light show”. The full source code for both the firmware and the server side can be found on GitHub, in case you want to try it yourself and improve on it.

If this project isn’t your thing, don’t worry, we have covered a lot of different and interesting LED strip projects in the past too.

Stealing DNA By Phone
https://hackaday.com/2019/04/24/stealing-dna-by-phone/
Wed, 24 Apr 2019

Data exfiltration via side-channel attacks can be a fascinating topic. It is easy to forget how many different ways electronic devices affect the physical world beyond their intended purpose. And creative security researchers like to play around with these side effects for ‘fun and profit’.

Engineers at the University of California have devised a way to analyse exactly what a DNA synthesizer is doing by recording the sound that the machine makes with a relatively low-budget microphone, such as the one in a smartphone. The recorded sound is then processed using algorithms trained to discern the different noises that a particular machine makes, and the audio is translated into the combination of DNA building blocks the synthesizer is generating.

Although they focused on a particular brand of DNA synthesizer, whose acoustics allowed them to spy on the building process, others might be vulnerable as well.

In the case of the DNA synthesizer, acoustics revealed everything. Noises made by the machine differed depending on which DNA building block—the nucleotides Adenine (A), Guanine (G), Cytosine (C), or Thymine (T)—it was synthesizing. That made it easy for algorithms trained on that machine’s sound signatures to identify which nucleotides were being printed and in what order.
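Conceptually this is ordinary supervised audio classification: cut the recording into the windows where the machine dispenses a base, extract spectral features, and let a trained classifier decide whether each window sounds like A, C, G or T. A very rough sketch of the idea (this is not the researchers’ code, and the features here are just a generic default):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def spectral_features(window, bands=32):
    """Collapse one audio window into a coarse band-energy fingerprint."""
    spectrum = np.abs(np.fft.rfft(window))
    return [band.sum() for band in np.array_split(spectrum, bands)]

# train_windows: labelled recordings of the machine dispensing known bases
# test_windows:  windows cut from a recording of an unknown synthesis run
def recover_sequence(train_windows, train_labels, test_windows):
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit([spectral_features(w) for w in train_windows], train_labels)
    return "".join(clf.predict([spectral_features(w) for w in test_windows]))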

Acoustic snooping is not new; several interesting techniques have been demonstrated in the past that raise, arguably, more serious security concerns. Back in 2004, a neural network was used to analyse the sound produced by computer keyboards and the keypads used on telephones and automated teller machines (ATMs), to recognize the keys being pressed.

You don’t have to rush out and soundproof your DIY DNA synthesizer room just yet, as there are probably more practical ways to steal the genome of your alien-cat hybrid; but for multi-million-dollar biotech companies with equally well-funded adversaries and a healthy paranoia about industrial espionage, this is an ear-opener.

We have written about other data exfiltration methods and side channels before, and this one, realistic scenario or not, is another cool audio snooping proof of concept.

Self-aware Robotic Arm
https://hackaday.com/2019/04/24/self-aware-robotic-arm/
Wed, 24 Apr 2019

If you have ever tried to program a robotic arm, or almost any robotic mechanism with more than 3 degrees of freedom, you know that a big part of the work goes into programming the movements themselves. What if you could build a robot, regardless of how you connect the motors and joints, and, with no knowledge of itself, the robot would become aware of the way it is physically built?

That is what Columbia Engineering researchers have done by creating a robot arm that learns how it is put together, with zero prior knowledge of physics, geometry, or motor dynamics. At first, the robot has no idea what its shape is, how its motors work, or how they affect its movement. After one day of trying out its own outputs in a pretty much random fashion and getting feedback on its actions, the robot creates an accurate internal self-simulation of itself using deep-learning techniques.

The robotic arm used in this study by Lipson and his PhD student Robert Kwiatkowski is a four-degree-of-freedom articulated robotic arm. The first self-models were inaccurate as the robot did not know how its joints were connected. After about 35 hours of training, the self-model became consistent with the physical robot to within four centimeters. The self-model then performed a pick-and-place task that enabled the robot to recalibrate its original position between each step along the trajectory based entirely on the internal self-model.
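Stripped of the robotics, the core loop is "motor babbling": issue random commands, record what actually happened, and fit a model that predicts outcome from command. A toy sketch of that loop, nothing like the actual deep-learning self-model, just the shape of the idea:

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def true_arm(joint_angles):
    """Stand-in for the physical robot: the unknown mapping from four joint
    angles to an end-effector (x, y) position that the robot must discover."""
    a, b, c, d = joint_angles
    return np.array([np.cos(a) + 0.5 * np.cos(a + b),
                     np.sin(c) + 0.5 * np.sin(c + d)])

# A day of "babbling": random commands and the observed outcomes
commands = rng.uniform(-np.pi, np.pi, size=(5_000, 4))
outcomes = np.array([true_arm(c) for c in commands])

# The learned self-model: predicts where a given command will put the arm
self_model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
self_model.fit(commands, outcomes)
print(self_model.predict(commands[:1]), outcomes[0])   # should roughly agree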

To test whether the self-model could detect damage to itself, the researchers 3D-printed a deformed part to simulate damage and the robot was able to detect the change and re-train its self-model. The new self-model enabled the robot to resume its pick-and-place tasks with little loss of performance.

Since the internal representation is not static, this not only helps the robot to improve its performance over time but also allows it to adapt to damage and changes in its own structure. This could help robots continue to function reliably as their parts start to wear out or, for example, when replacement parts are not exactly the same shape.

Of course, it will be a long time before this arm gets precision anywhere near Dexter, the 2018 Hackaday Prize winner, but it is still pretty cool to see the video of this research:
