Keeping an Application Alive

I was recently working on a project that, among other things, needed a process to restart the application should it ever terminate unexpectedly. This is a common requirement for kiosk applications, where the only UI the user can reach is a single application running on the system. We already had a utility that could do this; we generally refer to these as “Kiosk Application Monitors.”  But I decided not to use it. Why not? Because during development it is a pain to deal with an app that isn’t easily killed. If I manually terminated the application it would almost immediately restart; that behaviour was by design. When I really wanted the application to terminate and stay that way, things were more challenging. It had to be done from the Task Manager, and the Task Manager wasn’t easily accessible since the kiosk app ran as always-on-top. To see the Task Manager I would first have to kill the app, but if the app were killed it would just restart.

Not wanting to deal with this difficulty, I made a new Kiosk Application Monitor (hereafter referred to as the KAM). While I would generally prefer to write utilities in C#, I used C for this one; I was going to be using Win32 APIs, and it is easier to call them directly than to write P/Invoke declarations for them.

The key difference between this KAM and the others we had used is that this KAM can be terminated through keystrokes. It installs a keyboard hook that receives every keystroke the user makes, no matter which application has focus.

Termination on Detecting Safe Word

I’ve got a variable named SafeWord that contains the word that, when typed, will kill all of the child processes and shut down the KAM. To better insulate the app from accidental activation, I’ve mandated that the escape key be pressed before the safe word and the enter key be pressed immediately after it.  The keyboard hook receives the virtual key codes for the keys pressed; the codes for escape (0x1b) and enter (0x0d) are used directly within the source code. Until the escape key is pressed, every other keystroke is ignored. Once escape has been pressed, each subsequent keystroke is checked for a match against the safe word. At the first mismatch the procedure stops comparing until the escape key is pressed again. When the enter key is pressed, the routine checks whether the end of the safe word has been reached. If it has, the user has typed the safe word and the routine that terminates the child processes is called.

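The hook procedure boils down to a small comparison routine. Below is a minimal sketch of what it could look like, assuming a low-level hook installed with SetWindowsHookEx(WH_KEYBOARD_LL, …), an illustrative safe word, and a TerminateChildProcesses routine defined elsewhere in the KAM; the names and details here are mine, not the actual source.

#include <windows.h>
#include <ctype.h>
#include <string.h>

void TerminateChildProcesses(void);               // assumed to exist elsewhere in the KAM

static const char SafeWord[] = "letmeout";        // illustrative safe word
static size_t safeWordIndex = 0;                  // index of the next expected character
static BOOL armed = FALSE;                        // TRUE once the escape key has been pressed

LRESULT CALLBACK KeyboardProc(int nCode, WPARAM wParam, LPARAM lParam)
{
    if (nCode == HC_ACTION && (wParam == WM_KEYDOWN || wParam == WM_SYSKEYDOWN))
    {
        const KBDLLHOOKSTRUCT* key = (const KBDLLHOOKSTRUCT*)lParam;
        if (key->vkCode == 0x1B)                  // escape: start comparing from the beginning
        {
            armed = TRUE;
            safeWordIndex = 0;
        }
        else if (armed && key->vkCode == 0x0D)    // enter: did we reach the end of the safe word?
        {
            if (safeWordIndex == strlen(SafeWord))
                TerminateChildProcesses();
            armed = FALSE;
        }
        else if (armed)
        {
            // Letter keys report virtual key codes matching their uppercase ASCII values.
            if ((char)key->vkCode == (char)toupper((unsigned char)SafeWord[safeWordIndex]))
                ++safeWordIndex;
            else
                armed = FALSE;                    // first mismatch: stop comparing until the next escape
        }
    }
    return CallNextHookEx(NULL, nCode, wParam, lParam);
}

// The hook itself would be installed elsewhere with something like:
// SetWindowsHookEx(WH_KEYBOARD_LL, KeyboardProc, GetModuleHandle(NULL), 0);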

The keyboard routine is a small part of what this utility does, but a real time saver for me.

Allowing Only One Instance

I didn’t want more than one instance of this program ever running on the same machine. The way I managed that is pretty typical for Win32 programs: I used a named event to ensure there is a single instance. Named events are shared across programs. After creating one (with CreateEvent), calling GetLastError() indicates whether this is the only instance of an application with access to the event or whether another program has already created an event with the same name. When another instance has already created one, GetLastError returns ERROR_ALREADY_EXISTS.

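The check itself is only a few lines. Here is a rough sketch; the event name is illustrative, and any name shared by every instance of the KAM would do.

#include <windows.h>

// Returns TRUE if this is the first (and only) instance of the KAM.
BOOL EnsureSingleInstance(void)
{
    // The handle is intentionally left open for the life of the process so the
    // named event stays registered with the system.
    HANDLE hInstanceEvent = CreateEvent(NULL, TRUE, FALSE, TEXT("KioskApplicationMonitorInstance"));
    if (hInstanceEvent != NULL && GetLastError() == ERROR_ALREADY_EXISTS)
    {
        // Another instance created the event first; this instance should exit.
        CloseHandle(hInstanceEvent);
        return FALSE;
    }
    return TRUE;
}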

Process Description and Start-Up

Moving from the ancillary functionality to the core functionality, I decided to use JSON for specifying information about the processes that should be started and watched. The C++ standard libraries do not intrinsically support JSON, so I used a third-party library for this. The data in the JSON is used to populate a structure that gives very basic information on the process to be monitored.

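The structure looks something like the following sketch. The field names are my own guesses at what the article describes; the actual definition may differ.

#include <windows.h>

// Basic description of a process to start and watch, populated from the JSON file.
typedef struct _WATCHED_PROCESS
{
    WCHAR CommandLine[MAX_PATH];          // executable path and arguments handed to CreateProcess
    WCHAR WorkingDirectory[MAX_PATH];     // startup directory for the child process
    BOOL  Ignore;                         // TRUE to keep the entry in the file but skip launching it
    PROCESS_INFORMATION ProcessInfo;      // filled in by CreateProcess once the process is started
} WATCHED_PROCESS;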

Most of these values are passed directly to the Win32 function CreateProcess. Ignore is there so that I can disable a process without completely removing it from my configuration file. For each process that I’m going to start, I create a new thread to monitor it.

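In sketch form, the thread creation might look like the following; WATCHED_PROCESS is the structure sketched above and CreateWatchedProcess is the thread routine described next. The exact signatures here are assumptions.

#include <windows.h>

DWORD WINAPI CreateWatchedProcess(LPVOID parameter);   // thread routine, sketched below

void StartMonitoringThreads(WATCHED_PROCESS* processes, size_t processCount)
{
    for (size_t i = 0; i < processCount; ++i)
    {
        if (processes[i].Ignore)
            continue;                                  // entry disabled in the configuration
        HANDLE hThread = CreateThread(
            NULL, 0,
            CreateWatchedProcess,                      // starts and watches one process
            &processes[i],                             // per-process description
            0, NULL);
        if (hThread != NULL)
            CloseHandle(hThread);                      // handle not needed; the thread runs until shutdown
    }
}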

Most of the code within the CreateWatchedProcess function runs within an infinite loop. The process is created and information about it is populated into a PROCESS_INFORMATION variable (which I have named pi). The value of interest to the KAM is pi.hProcess, the process handle. The wait functions in the Win32 API can accept process handles. In the case of WaitForSingleObject, passing a process handle will block the calling thread until the process terminates.  There is nothing my program has to do to detect the termination of the process. When the line after WaitForSingleObject executes, we know the process has terminated. The only question is whether it terminated because there was a request for the KAM to shut down or whether this was an unexpected termination.

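A sketch of what the heart of CreateWatchedProcess might look like follows. It builds on the WATCHED_PROCESS sketch above and assumes a shutdownRequested flag set by the safe-word handler; the real code has more error handling.

#include <windows.h>

extern volatile BOOL shutdownRequested;    // assumed flag, set when the safe word has been typed

DWORD WINAPI CreateWatchedProcess(LPVOID parameter)
{
    WATCHED_PROCESS* wp = (WATCHED_PROCESS*)parameter;
    while (!shutdownRequested)
    {
        STARTUPINFOW si;
        PROCESS_INFORMATION pi;
        ZeroMemory(&si, sizeof(si));
        si.cb = sizeof(si);
        ZeroMemory(&pi, sizeof(pi));

        if (!CreateProcessW(NULL, wp->CommandLine, NULL, NULL, FALSE, 0, NULL,
                            (wp->WorkingDirectory[0] != L'\0') ? wp->WorkingDirectory : NULL,
                            &si, &pi))
        {
            Sleep(5000);                   // could not start the process; wait and try again
            continue;
        }
        wp->ProcessInfo = pi;

        // Blocks until the child process terminates for any reason.
        WaitForSingleObject(pi.hProcess, INFINITE);

        CloseHandle(pi.hProcess);
        CloseHandle(pi.hThread);
        // Loop around and restart the process unless the KAM is shutting down.
    }
    return 0;
}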

How Did it Perform

In testing, things worked fine. I intentionally put a bug in the program of interest that would cause it to crash, and the KAM restarted it. The app I was using it with also remembered its state and could restore the UI to what it was before; from the perspective of the user, the screen flashed but was otherwise normal. When I tried the same utility in the production environment, I’m happy to say that its full functionality was not exercised; the program it was monitoring never crashed.

I’ve found the program to be useful and see some opportunities to increase its utility. I plan to make updates to it to support things such as starting processes in a specific order, monitoring the message pump of another process to detect lock conditions, and allowing the utility to accept or transmit process information over a network connection.

Raspberry Pi 4 Announced


The fourth generation of the Raspberry Pi has been announced. Each previous generation of the Raspberry Pi was primarily identified by a single set of specifications (not counting the Raspberry Pi Compute Module, because it generally is not used by hobbyists). With the Raspberry Pi 4, this isn’t the case: there are three variations available. The new Raspberry Pi 4 comes with a 1.5 GHz ARM Cortex-A72 quad-core processor.  With that processor the Raspberry Pi 4 can decode 4K video at 60 FPS or two 4K videos at 30 FPS. The amount of RAM available depends on the version. The smallest amount of RAM, 1 GB, is available for $35 USD. The next size, 2 GB, can be purchased for $45 USD. The largest unit, with 4 GB, is $55 USD.

At first glance, the unit will be recognized as a Raspberry Pi, but a closer look at the ports will show some immediate differences. The Pi has moved from a micro-USB power port to USB-C. The full-sized HDMI port is gone and has been replaced with two micro-HDMI ports; the unit can drive two displays at once.  Two of the four USB ports have been upgraded to USB 3 while the other two are still USB 2. The wireless capabilities are upgraded to Bluetooth 5.0 and dual-band 802.11ac Wi-Fi.

 

The unit is available for purchase from Raspberry Pi’s site now.  A new case for the Pi 4 and a USB-C power supply of appropriate wattage are both available through the site as well.

 

https://www.raspberrypi.org/products/raspberry-pi-4-model-b/

Raspberry Pi 4 on Amazon

 

Rotation Notations

I was writing some code to perform celestial calculations.  A lot of it handled changes in position from certain rotations (orbits, revolutions).  There are also instances where time is treated as a rotation (e.g., 1 hour of rotation is 15 degrees).  The best notation for a rotation depends on what is being done.  Here are the rotation notations that might be used.

  • Radians
  • Degrees
    • Decimal Degrees
    • Degrees, Minutes, Seconds
  • Hours
    • Decimal Hours
    • Hours, Minutes, Seconds

Conversion from one notation to another is easy.  What I did find challenging was ensuring that the right conversion had been performed before working with a value.  The trig functions always expect to receive radians.  More than once I made the mistake of converting to the wrong unit before performing a calculation.  Rather than continue on a path with many opportunities for mistakes, I made a single class to represent rotations that can be used in various scenarios.  Internally it always represents the rotation in degrees.  When I need a specific notation, there are methods to explicitly convert to any of the other rotation types.

Instances of this custom type can also be assigned a preferred notation. The preferred notation is used when printing to the output stream. This allows a preferred format to be assigned without risking any conversion mistakes.

The interface for the class and its supporting types follows.

#include <stdio.h>
#include <cmath>
#include <iostream>

typedef double Degree;
typedef double  Hour;
typedef double Minute;
typedef double Second;
typedef double Radian;

enum RotationNotation {
    NOTATION_DEGREES, 
    NOTATION_DMS, 
    NOTATION_HOURS, 
    NOTATION_HMS, 
    NOTATION_RADIANS
};

class Rotation;

struct HMS {
    Hour H;
    Minute M;
    Second S;
};


struct DMS {
    Degree D;
    Minute M;
    Second S;
} ;

std::ostream& operator << (std::ostream& o, const HMS& h);
std::ostream& operator << (std::ostream& o, const DMS& d);
std::ostream& operator << (std::ostream& o, const Rotation a);


double sin(const Rotation& source);
double cos(const Rotation& source);

Hour RadToHour(const Radian );
Hour HMSToHour(const HMS& hms);
Hour DMSToHour(const DMS&);
Hour DegToHour(const Degree degrees);

DMS RadToDMS(const Radian);
DMS DegToDMS(const Degree degrees);
DMS HourToDMS(const Hour hour);
DMS HMSToDMS(const HMS&);

HMS RadToHMS(const Radian);
HMS DegToHMS(const Degree degrees);
HMS DMSToHMS(const DMS&);
HMS HourToHMS(const Hour);

Degree RadToDeg(const Radian);
Degree DMSToDeg(const DMS& );
Degree HMSToDeg(const HMS&);
Degree HourToDeg(const Hour hour);

Radian HourToRad(const Hour);
Radian HMSToRad(const HMS& );
Radian DMSToRad(const DMS& );
Radian DegToRad(const Degree);


class Rotation { 
    private:
        Degree _degrees;
        RotationNotation _notation;
    public:
        Rotation();
        Rotation(const Rotation& source);
        
        RotationNotation getNotation() const;
        void setNotation(RotationNotation);

        const Degree getDegrees() const;
        const Hour getHours() const ;
        const Radian getRadians() const ;
        const DMS getDMS() const ;
        const HMS getHMS() const ;


        void setDegrees(const Degree degree) ;
        void setHours(const Hour hour)  ;
        void setRadians(const Radian rad)  ;
        void setDMS(const DMS& dms)  ;
        void setHMS(const HMS& hms)  ;
};
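
As an illustration of what the conversions involve (these are sketches building on the declarations above, not the implementation from the download), the scalar conversions reduce to simple scaling: 360 degrees = 2π radians = 24 hours, so one hour of rotation corresponds to 15 degrees. The overloaded trig functions can then convert to radians internally so callers never pass the wrong unit.

// Illustrative implementations of a few of the helpers declared above.
namespace {
    const double PI = 3.14159265358979323846;
}

Radian DegToRad(const Degree degrees) { return degrees * PI / 180.0; }
Degree RadToDeg(const Radian radians) { return radians * 180.0 / PI; }
Hour   DegToHour(const Degree degrees) { return degrees / 15.0; }   // 15 degrees per hour
Degree HourToDeg(const Hour hours)     { return hours * 15.0; }

// The trig overloads accept a Rotation and handle the radian conversion themselves.
double sin(const Rotation& source) { return std::sin(DegToRad(source.getDegrees())); }
double cos(const Rotation& source) { return std::cos(DegToRad(source.getDegrees())); }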

Download Code 2.0 KB

-30-

What’s In My Bag? Windows To Go: Windows on USB

When I’m travelling for work there are a number of items that I make sure are in my travel bag.  These include a USB-C charger (almost all of my electronics can charge over USB-C now); a copy of any recent projects I’ve worked on (sometimes I need to hop in to help a team member); and a computer.

The operating system on that computer may vary.  Sometimes I travel with a Windows machine, sometimes a Linux machine, and other times a Mac.  Regardless of the operating system, I almost always have a Windows To Go drive.

That last item is probably a little more obscure.  Since Windows 8, there has been a special type of USB drive that differs in one respect: it appears to the computer as a fixed drive, even though it is connected to a USB port.  These drives were made specifically for creating a portable Windows experience on a USB drive.

It is possible to make bootable Windows environments on other USB drives, but there are some differences.  If you have a Windows ISO you can make a bootable Windows USB drive with a number of tools.  I recommend using Rufus.  Though there are other options (including one that is part of Windows Enterprise Edition), Rufus doesn’t care much about the drive’s properties; it will just write the data to the drive in a bootable format.

With any type of USB drive you’ll be able to boot up with little to no trouble and do the initial setup on the drive.  The difference shows up when you start installing programs.  Some programs will only install to a fixed drive; Visual Studio is one such program.  If your USB drive isn’t Windows To Go certified, chances are it will appear to the computer as a removable drive, and Visual Studio will refuse to install to it.

If you know that the programs of interest to you don’t care about the drive type, there are still a couple of reasons you may want to consider a Windows To Go certified drive.  One is performance: there was a minimum performance requirement that these drives had to meet as part of their certification.  (Today there are other solid-state drives available that are much faster than the available Windows To Go drives, such as the Thunderbolt 3-only Samsung X5.)  Another consideration is security.  Some Windows To Go drives have hardware-implemented encryption and include the option of erasing the contents of the drive under conditions that you define (such as the wrong password being entered too many times at boot).

The best practice, if you plan to work with any sensitive data, is to not store it on a portable drive, if possible. But if you must, then encryption is an uncompromising need. Whether or not a Windows To Go drive is necessary for you may only be known after you review your needs.

One significant drawback of Windows To Go drives is that you cannot perform a major Windows feature update on them; the installation can still receive Windows security updates, though. If you want to install a major Windows update, it’s necessary to format the entire drive and start from scratch.

For my needs, I have a Super Talent 128 GB USB 3.0 drive (for speed) and a Western Digital 500 GB mechanical drive (much slower, but I can work with larger projects on it).  If you choose to do this with a certified drive, make sure you read the drive’s instructions before you begin writing your Windows image to it.  Some drives come with their own software that must be used for creating the image, and if you start off by formatting the drive you’ve already destroyed the software you need (and it may not be readily available for download from the company’s website).

If your project needs call for a Windows To Go certified drive, I’ve found four available on Amazon.


-30-

NodeJS on BrightSign

When I left off, I was trying to achieve data persistence on a BrightSign (model XT1144) using the typical APIs that one would expect to be available to an HTML application. To summarize the results: the usual feature checks report both localStorage and indexedDB as available, but indexedDB isn’t actually usable, and localStorage appears to work but doesn’t survive a device reset.

The next method to try is NodeJS.  The BrightSign devices support NodeJS, but the entry point is different from that of a standard NodeJS project. A typical NodeJS project has its entry point defined in a JavaScript file; for BrightSign, the entry point is an HTML file. NodeJS is disabled on the BrightSign by default, and there is nothing in BrightAuthor that will enable it. There is a file written to the memory card (one you might otherwise ignore when using BrightAuthor) that must be manually modified. For future deployments with BrightAuthor, take note that you will want to keep a backup of the modification described in this article so that it can be restored if a mistake is made.

The file, AUTORUN.BRS, is the first point of execution on the memory card. You can think of this file as a boot loader; it gets your BrightSign project loaded and transfers execution to it. For BrightSign projects that use an HTML window, the HTML window is actually created by the execution of this file. I am not going to cover the BrightScript language; for those who were ever familiar with it, it looks very much like a variant of BASIC. An HTML window is created with a call to the CreateObject method with “roHtmlWidget” as the first parameter. The second parameter is a “rectangle” object that indicates the coordinates at which the HTML window will be created. The third (optional) parameter is the one of interest here: it is an object that defines options to be applied to the HTML window.  The options that we want to specify are those that enable NodeJS, set a storage quota, and define the root of the file system that we will be accessing.

The exact layout of your AUTORUN.BRS may differ, but in the one I am currently working with, I have modified the “config” object by adding the necessary entries. It is possible that in your AUTORUN.BRS the third parameter is not being passed at all; if that is the case, you can create your own “config” object to pass as the third parameter. The additions I have made are the entries that enable NodeJS and set the storage path and storage quota in the following.

is = {
    port: 3999
}    
security = {
        websecurity: false,
        camera_enabled: true
}
    
config = {
    nodejs_enabled: true,
    inspector_server: is,
    brightsign_js_objects_enabled: true,
    javascript_enabled: true,
    mouse_enabled: true,
    port: m.msgPort,
    storage_path: "SD:"
    storage_quota: 1073741824            
    security_params: {
        websecurity: false,
        camera_enabled: true
    },
    url: nodeUrl$
}
    
htmlWidget = CreateObject("roHtmlWidget", rect, config)

Once Node is enabled, the JavaScript for your page runs with the capabilities that you would generally expect to have in a NodeJS project. For my scenario, this means that I now have access to the fs module for reading and writing to the file system.

fs = require('fs');
var writer = fs.createWriteStream('/storage/sd/myFile.mp4',{defaultEncoding:'utf16le'});
writer.write("Hello World!\r\n");
writer.end()

I put this code in an HTML page and ran it on a BrightSign. After the device booted up and had been on for a few moments, I inspected the SD card and saw that my file was still there (success!).  Now I have a direction in which to move for file persistence.

One of the nice things about using the ServiceWorker object for caching files is that you can treat a file as either successfully cached or failed. When using a file system writer there are other states that I will have to consider. A file could have partially downloaded but not finished (due to a power outage, network outage, timeout, someone pressing the reset button, etc.). I’m inclined to be pessimistic when it comes to gauging the reliability of factors external to a system; I find it necessary to plan with the anticipation of them failing.

With that pessimism in mind, there are a couple of approaches that I can immediately think to apply to downloading and caching files.  One is to download files with a temporary name and change the name of the file from its temporary to permanent name only after the download is successful. The other (which is a variation of that solution) is to download the file structure to a temporary location. Once all of the files are downloaded, I could move the folder to its final place (or simply change the path at which the HTML project looks to load its files). Both methods could work.

I am going to try some variations of the solutions I have in mind and will write back with the results of one of the solutions.

-30-

NEWS:Linux on Dex Coming to More Devices


Samsung has announced that Linux on Dex is coming to more devices. Previously it was only available on non-LTE models of the Galaxy Tab S4 and on the Galaxy Note 9. Per an email that Samsung sent on Monday, support is coming to the Android Pie builds of the S9, S9+, S10e, S10+, Tab S4, and Tab S5e.

Based on interactions with others (and on my own personal story), there are owners of the Tab S4 who haven’t yet received Linux on Dex support and are waiting for it with anticipation. I’ve not been able to confirm compatibility yet, as the Pie build of Android isn’t yet available for my device. The Linux on Dex page had previously stated that none of the LTE Tab S4 models were supported; the page now only states that the Verizon LTE tablets are not supported.  I hope this means that support for my device is coming. For now the only option is to wait.

Update (2019-April 30): Today I received the Android Pie update for the Galaxy Tab S4. It does indeed have support for Linux on Dex (finally!).

Current BrightSign Models

There are four main lines in the BrightSign product range (there are a few others available for hardware integrators, but I’m ignoring those for now and only looking at the units sold in their own cases).

LS Line

The LS line of BrightSign players is compact. It is ideal when working with a single HD stream at up to 60 frames per second. It also offers a single USB port for connecting other peripherals.


HD Line

The HD line can decode a single 4K video stream. The HD line of players also adds a GPIO port, allowing additional hardware to be connected to the player for other forms of interaction.


XD Line

These units are set apart from the HD line by being capable of decoding up to two 4K video streams and by improved HTML rendering capabilities.


XT Line

These are the most capable BrightSign units, able to decode two 4K video streams at once. Some units in this family also feature an HDMI input, allowing them to mix video from another source into their content. These units have two USB ports (USB-A and USB-C) and can also be powered via PoE.


BrightSign: An Interesting HTML Client

Among the many HTML-capable clients that I’ve worked with is an interesting family of devices called BrightSign media players (made by Roku). One of the things they do exceptionally well is play video.  They support a variety of codecs for that job, and it’s easy to have them go from one video to another in response to a network signal, a switch being pressed, or a touch on the screen. I’m using the XT1143 and XT1144 models for my tests.

For simple video-only projects, the free tool BrightAuthor works well, but it is best suited to scenarios with relatively little navigational complexity.  For projects with more navigational complexity, or that require more complex logic in general, an HTML-based project may be a better choice.

After working on a number of HTML projects for BrightSign, I’ve discovered that the boundaries of what can and can’t be done have a different shape than on some other platforms.  Some things that tax other devices work well on the BrightSign players, while other things that don’t work so well on BrightSign work fine on other players.

The BrightSign HTML rendering engine in devices with recent firmware is based on Chromium.

BrightSign Firmware Version     Rendering Engine
4.7 - 5.1                       WebKit
6.0 - 6.1                       Chromium 37
6.2 - 7.1                       Chromium 45
8.0 (not yet released)          Chromium 65

You can encounter some quirky rendering behaviour on a device with older firmware.  At the time I’m writing this, the 8.0 firmware isn’t actually available yet (coming soon). I’ve found that while the device can render SVG, performance can suffer greatly if I try to animate SVG objects.  It is also only possible to have two media items (audio or video) playing at a time.  If an attempt is made to play more than two items, the third item is queued and will not begin to play until the previous two complete playing.

Rather than spending a lot of time developing and testing something in Chrome before deploying it to a BrightSign, it is better to start testing your code on the BrightSign as soon as possible.  The normal deployment process for code that runs from the BrightSign is to copy a set of files to a memory card, insert it into the BrightSign, reboot the device, and wait a minute or two for it to start up and render your content.

Compared to testing locally, where you can just hit a refresh button to see how something renders, this process is far too long.  A better alternative, if your development machine and BrightSign are on the same network and subnet, is to make a BrightSign presentation containing an HTML widget that points back to your development machine.  You’ll need a web server up and running on your machine and will use the URL of the page of interest in the BrightSign presentation.  You will also want to make sure you have enabled HTML debugging; this is necessary for quick refreshes of the page.

When the BrightSign boots up, if everything is properly configured you should see your web page appear.  You’ve got access to all of the BrightSign-specific objects even though the page is being served from another machine.  You can inspect the elements of the page or debug the JavaScript by opening a Chrome browser on your development machine and browsing to the IP address of the BrightSign on port 2999.  Note that only one browser tab can be debugging the code running on the BrightSign at a time.

The interface that you see is identical to the Chrome Developer Console you would see when debugging locally.  If you make a change to the HTML, refreshing the page is simply a matter of pressing [CTRL]+[R] in the development window; this will invoke a refresh on the BrightSign too.

I’ll be working on a BrightSign project over the course of the next few weeks and will be documenting some of the other good-to-know things and things that do or don’t work well on the devices.

-30-

Is it Really a Hologram

Photography and Holography, A Brief History

I was having a discussion about some recent articles from a few blogs. The articles spoke of what they labeled as holograms, but some of them didn’t actually have anything to do with holography despite their stated subjects. The question came up, “What is a hologram?” I think the answer can be better understood by contrasting holography with photography and briefly presenting the differences in the principles of the two technologies.

Development of Photography and Optics

As with many technologies, the contributions that led to photography came from incremental discoveries and developments over a long period of time. The principles of photography were developed well before those of holography. One of the earliest devices related to photography was the camera obscura (Latin: camera meaning chamber or room, obscura meaning dark). When we think of cameras now we typically don’t think of rooms in a building; the usage of many words changes over time, and our use of the word camera today evolved from this one. A camera obscura can refer to a room or a box with all light blocked off except for a single hole through which light is allowed to enter. An upside-down image of the scene outside is projected on the wall opposite the hole. The earliest known writings that mention such a device are those of the philosopher Mozi. Mozi concluded from the camera obscura that light travels in a straight line, and his followers developed an optic theory based on this.

Camera Obscura Picture

Image Credits: Wikipedia

There had been two prevailing theories of how vision worked. The emission theory of vision, supported by people such as Ptolemy and Euclid, hypothesized that the eyes emitted something, and that for us to see, these emissions had to collide with the object being perceived. The intromission theory of vision (supported by Aristotle) hypothesized that physical forms of the object were entering one’s eye. Around 1011 – 1020 C.E. the “Book of Optics” was written by Alhazen. His view was that light of different colors travels from an illuminated object in every direction. Through experiments with lenses and mirrors he developed a more complete theory of how light travels, but he could not answer the question of how that light formed an image in the eye. Kepler addressed the question of how images form, and also saw the human mind as playing an active role in the perception of images.

It had already been known that exposure to light would change the color of certain substances. In 1727, in Germany, Johann Heinrich Schulze published the results of experiments showing that the darkening of silver salts was due to exposure to light. The first person to capture images through such a process was Thomas Wedgwood, but his images were less than permanent, as they would fade with further exposure to light. It wasn’t until 1826 that Joseph Nicéphore Niépce was able to create the first permanent image. He used a camera obscura with an eight-hour exposure time through a process he called heliography (Greek: helio from sun and -graphy from writing/message). Niépce partnered with Louis-Jacques-Mandé Daguerre to improve the process, and Daguerre carried on the work of improving the contrast after Niépce’s death. Henry Fox Talbot had independently developed a process for fixing silver salts, only to find that Daguerre had accomplished this before him. Nevertheless, he sent a paper to the Royal Institution titled “Some Account of the Art of Photogenic Drawing.” In his process a negative image was captured, and that negative was later copied to a positive image. By contrast, the daguerreotype produced a direct image. Daguerreotype images were sharper, but the negative in Talbot’s two-step process allowed unlimited positive images to be produced from it. The first daguerreotype camera was produced in 1839.

Early Daguerreotype camera

Image Credits: Wikipedia

With these process improvements, instead of an exposure that lasted for hours in a dark room, exposures took minutes with a portable box. Instead of a shutter, a lens cap was removed from the front of the device. As film became more sensitive, exposure times were reduced from minutes to seconds, and a mechanical shutter was added to better control them. In 1885 George Eastman started producing paper film; by 1889 he had changed to celluloid film. Eastman decided to sell cameras at a loss, expecting to make the money back from sales of film. The first camera was called the “Kodak.” In 1975 Kodak engineer Steven Sasson made a camera with an electronic sensor. The images were captured at a resolution of 0.1 megapixels. He also combined the sensor with parts from a movie camera to save series of images to a cassette tape that could be viewed on a TV monitor. Twenty-five years later, flash memory started to replace film and magnetic tape.

Beginning of 3D Imaging

The same year that the first daguerreotype camera was produced, Sir Charles Wheatstone invented the reflecting mirror stereoscope. He used mirrors at 45 degrees to the viewer’s eyes so that each eye would see a slightly different drawing. Through binocular depth perception the two images were experienced by the viewer as a single three-dimensional scene.

Mirror stereoscope

Image Credits: Wikimedia

The same year, David Brewster created a simple stereoscope, crediting the idea to a teacher of mathematics named Elliot who is said to have come up with it in 1823. Brewster improved upon the concept with the lenticular (lens-based) stereoscope, also known as the Brewster stereoscope. After the design was taken to France, Jules Duboscq improved on it further with the creation of stereoscopic daguerreotypes. In 1861 Oliver Wendell Holmes made a version of the stereoscope that was easier to produce. The View-Master stereoscope was patented in 1939. In 1950 a device called the “Sensorama” was created, designed to present stereoscopic motion pictures, smells, feelings, and sound. Around the same time Douglas Engelbart (inventor of the mouse) was experimenting with using screens as input and output devices. In 1968 the first system that would be described in modern terminology as “augmented reality” was created by Ivan Sutherland and Bob Sproull. It was heavy, the headset had to be suspended from the ceiling, and the graphics it displayed were wireframes.

Viewmaster: one version of a Stereoscope

View Master Image Credits:Wikimedia

Holmes Stereoscope

Holmes Stereoscope Image Credits: Wikimedia

Development of Holograms

The development of holograms occurred much more recently. In 1947 Dennis Gabor developed holographic theory; his efforts were aimed at improving the quality of images from electron microscopes. In electron holography a subject is placed in a diverging electron beam. Electrons scattered by the object and electrons undisturbed by the object both strike a detector and create an interference pattern with each other, and an image of the object is reconstructed from this interference pattern. Holograms made with light didn’t occur at the time, in part because of the properties of available light sources. Most light sources emit light that falls across a spectrum of wavelengths (colors) rather than a single pure color. It wasn’t until 1960 that a suitable light source became available through the work of N. Basov, A. Prokhorov, and Charles Townes with the development of the laser. Light emitted from a laser has two properties that are vital to making holograms: it is monochromatic (a single pure color) and it is coherent. One might wonder, if single-color light is needed, why not just add a color filter to a light bulb? Most light filters reduce, but do not necessarily eliminate, the other wavelengths of light, and the result still isn’t coherent. (Note: LED lighting achieves being monochromatic without being coherent.)

Components of original Ruby Laser

Components of original Ruby Laser. Image credits Wikimedia.

Understanding light coherence requires a bit of knowledge of the wave-particle duality of light. Those who performed the double-slit experiment in a physics class may remember discussing this. Consider ripples on the surface of water from a pebble being dropped in. If you could view a cross section of the waves, you’d see the ripples form crests and troughs. If two pebbles are dropped close to each other at the same time, waves will emanate from two spots and overlap and interfere with each other. There will be areas where two crests coincide, forming an even higher crest (constructive interference); areas where two troughs coincide, forming a lower trough (also constructive interference); and areas where a trough from one wave and a crest from the other coincide, leaving the water at the same level it was before there were any waves (destructive interference). The distance between corresponding parts of a wave (e.g., from one crest to the next) is the wavelength. The number of crests or troughs that occur over some period of time is the wave’s frequency. The same principle occurs with sound waves; noise-cancelling headphones use destructive interference to reduce the intensity of the sound waves reaching the ears of the wearer. The same principle also applies to light. Most normal light sources emit waves that are out of step with one another, while the waves from a laser stay in step with each other, which is what is meant by coherent.

One process for producing light holograms is similar to that of electron holography. Instead of a detector being hit with undisturbed and scattered electrons, a detector is hit with undisturbed and scattered particles of light (photons). The detector is a holographic plate. Because slight movement of the subject being holographed, of the light source and optics, or of the holographic plate would change the interference pattern, it’s necessary for all of these parts to be absolutely still while the “image” is being made. After the exposure, the holographic plate can be fixed/developed so that further exposure to light won’t damage the recording.

Looking at an object through a hologram is like looking at an object through a window. If you take a hologram and break it in half, you can still see the hologram; it’s analogous to reducing the size of the window through which you look by painting over part of it. You can still see outside, but the number of angles from which you can view the scene is reduced. If you move your head to the left or right, your perspective of the holographed objects changes, which contributes to the perception of depth. Each observer of a hologram sees it from their own perspective, and each eye having its own perspective provides the stereoscopic depth cue.

Is it Really a Hologram


Returning to the discussion that inspired this post: when I was commenting on one article, I was surprised that an article with holograms in its title was actually about holograms; most articles I come across mentioning holograms are not. What about the HoloLens? It is described as being “the first fully untethered, holographic computer, enabling you to interact with high‑definition holograms in your world.” Are these really holograms? No, they are not holograms in the sense that the word is used in holography. Computer-based systems are full of terms and names borrowed from other items and concepts, and we often use these terms without thinking much about them. An audio streaming application isn’t really a radio. The root graphical interface on my computer isn’t really a desktop. There’s a long tradition of adopting terms as metaphors, and after those terms are used long enough they come to denote the item for which they have been used. Internet radio isn’t radio, but it offers some of the experience of using a radio. The root graphical interface of some computers has been called the “desktop” for 34 years at the time that I’m writing this. Similarly, the images viewed through the HoloLens are not from holography, but there are elements of the experience of viewing a hologram in the HoloLens experience. If you move your head from left to right, the perspective of the object changes. Several perceptual depth cues are present, including stereoscopic images, parallax, and perspective transformations of the represented object. The use of the word seems to communicate well what to expect from the experience: the presence of an image with depth.

 

BigInt in JavaScript

As a developer, there are some problems that I enjoy solving.  For some of them, JavaScript had not been my tool of choice because of the limits on the precision of its Number type.  That is no longer the case with the JavaScript type BigInt.  The number of bytes used to store a BigInt scales with the magnitude of the number.  On some browsers the following JavaScript code will show a difference between Number and BigInt: the value in the BigInt variable increases as one would naturally expect it to, while the value printed for the Number stays the same.

var myBigInt = BigInt(Number.MAX_SAFE_INTEGER);
var myBigResult;
console.log('BigInt value ', myBigInt);
myBigResult = myBigInt * 4n;
console.log('BigInt value * 4 = ', myBigResult);

var myNumber = Number.MAX_SAFE_INTEGER-0.9;
var myResult;
console.log('Number value ', myNumber);
myResult = myNumber *4 ;
console.log('Number value * 4 = ', myNumber);

The output for the above was as follows:

BigInt value  9007199254740991n
BigInt value * 4 =  36028797018963964n
Number value  9007199254740990
Number value * 4 =  9007199254740990

For any operation that involves values beyond the maximum safe integer, the resulting value could be wrong. It is also possible to have values that appear identical when printed as a string but are unequal when compared.  BigInt literals are expressed as an integer number suffixed with a lowercase ‘n’.  If you use the typeof operator on a BigInt, the string 'bigint‘ is returned.

While there is no additional floating-point type offering higher precision, BigInt can be used for some kinds of calculations.  For example, if you needed a precise decimal value for money calculations, you could use BigInt and have your presentation of the results account for the fact that the number type is not storing a decimal position.  If the result of a calculation were 1234, the number could be converted to a string and a period inserted in the right position, producing the string 12.34 for the user.

The BigInt type is supported in Chrome 67.  Apple added support for Safari version 12.  Mozilla is currently working on support.  Microsoft is also working on an implementation.

 

‘main’: input parameter ‘input’ missing semantics

If you’ve received the error message “input parameter ‘xxxx’ missing semantics” for a shader, the cause is a missing semantic on one of your input parameters or structure members. Here is an example of a shader that will produce that error.

struct VSIn {
	float3 position;
	float4 color;
};

struct PS_IN
{
	float3 position:SV_POSITION;
	float4 color:COLOR;
};

PS_IN main(VSIn input)
{
	PS_IN output;
	output.position = input.position;
	output.color = input.color;
	return output;
}

Here the correction is to add the POSITION and COLOR semantics to the input structure. The corrected shader looks like the following.

struct VSIn {
	float3 position:POSITION;
	float4 color:COLOR;
};

struct PS_IN
{
	float3 position:SV_POSITION;
	float4 color:COLOR;
};

PS_IN main(VSIn input)
{
	PS_IN output;
	output.position = input.position;
	output.color = input.color;
	return output;
}

SSDP Discovery in HTML

While implementing a few projects I decided to build them in HTML, since that would work on the broadest range of my devices of interest. The projects needed to discover other devices connected to my home network, so I used SSDP for discovery.


SSDP (Simple Service Discovery Protocol) is a UDP-based protocol, part of UPnP, for finding other devices and services on a network. It’s implemented by a number of devices, including network-attached storage devices, smart TVs, and home automation systems. Many of these devices expose functionality through JSON calls, so you can easily make interfaces to control them. However, since the standards for HTML and JavaScript don’t include a UDP interface, how to perform discovery isn’t immediately obvious. Alternatives to SSDP include having the user manually enter the IP address of the device of interest or scanning the network; the latter can raise security flags when performed on some corporate networks.

For the most part, the solution to this is platform dependent. There are various HTML-based solutions that do allow you to communicate over UDP. For example, the BrightSign HTML5 players support UDP through roDatagramSocket, and Chrome makes UDP communication available through chrome.udp.sockets. Web pages don’t have access to this interface (for good reason, as there is otherwise potential for abuse), but Chrome extensions do. Chrome extensions won’t work in other browsers, but at the time of this writing Chrome accounts for 67% of the browser market share, and Microsoft has announced that it will use Chromium as the foundation for the Edge browser. So while this UDP socket implementation isn’t available in a wide range of browsers, it is available to a wide range of users, since this is the browser of choice for most desktop users.

To run HTML code as an extension there are two additional elements that are needed: a manifest and a background script. The background script will create a window and load the starting HTML into it.

chrome.app.runtime.onLaunched.addListener(function() {
    chrome.app.window.create('index.html', {
        'outerBounds': {
        'width': 600,
        'height': 800
        }
    });
});

I won’t go into a lot of detail about what is in the manifest, but I will highlight its most important elements. The manifest is in JSON format. The initial scripts to run are defined in app.background.scripts. Other important elements are the permissions element, without which the attempts to communicate over UDP or join a multicast group will fail, and the manifest_version element. The other elements are intuitive.

        {
            "name": "SSDP Browser",
            "version": "0.1",
            "manifest_version": 2,
            "minimum_chrome_version": "27",
            "description": "Discovers SSDP devices on the network",
            "app": {
              "background": {
                "scripts": [
                  "./scripts/background.js"
                ]
              }
            },
          
            "icons": {
                "128": "./images/j2i-128.jpeg",
                "64": "./images/j2i-64.jpeg",
                "32": "./images/j2i-32.jpeg"
            },
          
            "permissions": [
              "http://*/",
              "storage",
              {
                "socket": ["udp-send-to", "udp-bind", "udp-multicast-membership"]
              }
            ]
          }    

Google already has a wrapper class for chrome.udp.sockets, published as a code sample for using multicast on a network. In its unaltered form, the Google code sample assumes that text is encoded in 16-bit Unicode; SSDP uses 8-bit ASCII encoding. I’ve taken Google’s class and made a small change to it to use ASCII instead.

To perform the SSDP search the following steps are performed.

  1. Create a UDP port and connect it to the multicast group 239.255.255.250
  2. Send out an M-SEARCH query on port 1900
  3. Wait for incoming responses originating from port 1900 on other devices
  4. Parse the response
  5. Stop listening after some time

The first item is mostly handled by the Google multicast class; we only need to pass the port and address to it. The M-SEARCH query is a string. As for the last item, there is no definitive point at which responses stop coming in. Some devices appear to occasionally advertise themselves to the network even when not asked, so in theory you could keep getting responses; after some amount of time I’d suggest simply ceasing to listen. Five to ten seconds is usually more than enough. There are variations in the M-SEARCH parameters, but the following can be used to ask for all devices (other queries can filter for devices with specific functionality). The following is the string that I used; what is not immediately visible is that after the last line of text there are two blank lines.

M-SEARCH * HTTP/1.1
HOST: 239.255.255.250:1900
MAN: "ssdp:discover"
MX: 3
ST: ssdp:all
USER-AGENT: Joel's SSDP Implementation
    

When a response comes in, the function assigned to MulticastSocket.onDiagram will be called with a byte array containing the response, the IP address from which the response came, and the port number from which it was sent (which will be 1900 for our current application). In the following code sample, I initiate a search and print the responses to the JavaScript console.

const SSDP_ADDRESS = '239.255.255.250';
const SSDP_PORT = 1900;
const SSDP_REQUEST_PAYLOAD =    "M-SEARCH * HTTP/1.1\r\n"+
                                "HOST: 239.255.255.250:1900\r\n"+
                                "MAN: \"ssdp:discover\"\r\n"+
                                "MX: 3\r\n"+
                                "ST: ssdp:all\r\n"+
                                "USER-AGENT: Joel's SSDP Implementation\r\n\r\n";

var searchSocket = null;

function beginSSDPDiscovery() { 
    if (searchSocket)
        return;
    $('.responseList').empty();
    searchSocket = new MulticastSocket({address:SSDP_ADDRESS, port:SSDP_PORT});
    searchSocket.onDiagram = function(arrayBuffer, remote_address, remote_port) {
        console.log('response from ', remote_address, " ", remote_port);
        var msg = searchSocket.arrayBufferToString8(arrayBuffer);
        console.log(msg);        
    }
    searchSocket.connect({call:function(c) {
        console.log('connect result',c);
        searchSocket.sendDiagram(SSDP_REQUEST_PAYLOAD,{call:()=>{console.log('success')}});
        setTimeout(endSSDPDiscovery, 5000);
    }});    
}

Not that parsing the response strings is difficult by any means, but it would be more convenient if the response were a JSON object. I’ve made a function that does a quick transform on the response so I can work with it like any other JavaScript object.

function discoveryStringToDiscoveryDictionary(str) {
    var lines = str.split('\r');
    var retVal = {}
    lines.forEach((l) => {
        var del = l.indexOf(':');
        if(del>1) {
            var key = l.substring(0,del).trim().toLowerCase();
            var value = l.substring(del+1).trim();
            retVal[key]=value;
        }
    });
    return retVal;
}    

After going through this transformation a Roku Streaming Media Player on my network returned the following response. (I’ve altered the serial number)

{
    cache-control: "max-age=3600",
    device-group.roku.com: "D1E000C778BFF26AD000",
    ext: "",
    location: "http://192.168.1.163:8060/",
    server: "Roku UPnP/1.0 Roku/9.0.0",
    st: "roku:ecp",
    usn: "uuid:roku:ecp:1XX000000000",
    wakeup: "MAC=08:05:81:17:9d:6d;Timeout=10"
}

Enough code has been shared for the sample to be used, but rather than rely on the developer JavaScript console, I’ll change the sample to show the responses in the UI. To keep it simple, I’ve defined the HTML structure that I will use for each result as a child of a hidden div element with the class palette. For each response I’ll clone the div element with the class ssdpDevice, change some of its child members, and append it to a visible section of the page.

        
<html>
    <head>
        <link rel="stylesheet" href="styles/style.css" />
        <script src="./scripts/jquery-3.3.1.min.js"></script>
        <script src="./scripts/MulticastSocket.js"></script>
        <script src="./scripts/app.js"></script>
    </head>
    <body>
        <button onclick="beginSSDPDiscovery()">Scan Network</button>

        <div class="responseList"></div>

        <div class="palette">
            <div class="ssdpDevice">
                address: <span class="ipAddress"></span>
                location: <span class="location"></span>
                server: <span class="server"></span>
                search target: <span class="searchTarget"></span>
            </div>
        </div>
    </body>
</html>

 

The altered function that displays the SSDP responses in the HTML follows.

        function beginSSDPDiscovery() { 
            if (searchSocket)
                return;
            $('.responseList').empty();
            searchSocket = new MulticastSocket({address:SSDP_ADDRESS, port:SSDP_PORT});
            searchSocket.onDiagram = function(arrayBuffer, remote_address, remote_port) {
                console.log('response from ', remote_address, " ", remote_port);
                var msg = searchSocket.arrayBufferToString8(arrayBuffer);
                console.log(msg);
                discoveryData = discoveryStringToDiscoveryDictionary(msg);
                console.log(discoveryData);
        
                var template = $('.palette').find('.ssdpDevice').clone();
                $(template).find('.ipAddress').text(remote_address);
                $(template).find('.location').text(discoveryData.location);
                $(template).find('.server').text(discoveryData.server);
                $(template).find('.searchTarget').text(discoveryData.st)
                $('.responseList').append(template);
            }
            searchSocket.connect({call:function(c) {
                console.log('connect result',c);
                searchSocket.sendDiagram(SSDP_REQUEST_PAYLOAD,{call:()=>{console.log('success')}});
                setTimeout(endSSDPDiscovery, 5000);
            }});    
        }    

Working with non-SSL Web Services within an SSL page

I was making a Progressive Web App (PWA) and encountered a problem pretty quickly.  PWAs need to be served over SSL/HTTPS, and the services they access must also be served over SSL (a page served over SSL cannot access non-SSL resources).  Additionally, since my app is being served from a different domain, there must be a Cross-Origin Resource Sharing (CORS) header permitting the application to use the data.  My problem is that I ran into a situation where I needed to access a resource that met neither of these requirements.

Failed to load http://myUrl.com: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://SomeOtherURL.com' is therefore not allowed access.

The solution to this seemed obvious: a proxy service that would consume the non-SSL feed and make the results available over HTTPS.  There exist some third-party services that can do this for you (My SSL Proxy, for example), but the services I found were not meant for applications and generally don’t add the required CORS headers.  Implementing something like this isn’t hard, but for a lightweight application from which I wasn’t planning on making any immediate revenue, I wanted to minimize my hosting costs.  This is where two services that Google provides come into play.

The first Google service is Firebase.

Firebase (available at https://Firebase.Google.com) allows you to host static assets in the Google cloud.  These assets are served over SSL, which made it a perfect place for hosting most of the source code that runs on the mobile device.

As for the service proxy, I made a proxy service that ran on the second Google service: App Engine.  Google’s cloud service App Engine (available at https://cloud.google.com/appengine/) allowed me to write my proxy service using NodeJS (available at https://nodejs.org/).  I had it query the data I needed from the non-SSL service and cache the data for 30 seconds at a time.  All of Google’s services use SSL by default, so I didn’t have to do anything special.  When returning the response I added a few headers to handle CORS requirements.  Here’s the code for the node server.  If you use it, you will need to modify it so that any parameters that you need to pass to the non-SSL service are passed through.

const http = require('http')
const port = 80;
const MAX_SCHEDULE_AGE = 30;
const SERVICE_URL=`YOUR_SERVICE_URL`

var schedule = '[]';
var lastUpdate = new Date(1,1,1);


function timeDifference(a,b) { 
    var c = (b.getTime() - a.getTime())/1000;
    return c;
}

function sendSchedule(resp) {
    resp.setHeader('Access-Control-Allow-Origin', '*');
	resp.setHeader('Access-Control-Request-Method', '*');
	resp.setHeader('Access-Control-Allow-Methods', 'OPTIONS, GET');
	resp.setHeader('Access-Control-Allow-Headers', '*');
    resp.end(schedule);
}
const requestHandler = (request,response) => {
    var now = new Date();
    var diff = timeDifference(lastUpdate, now);
    console.log(request.url);
    if(diff<=MAX_SCHEDULE_AGE) {
        // The cached copy is still fresh; return it immediately.
        sendSchedule(response);
        return;
    }
    // The cached copy is stale; refresh it before responding.
    console.log('schedule is stale. updating');
    updateSchedule((d)=> {
        console.log('schedule updated');
        sendSchedule(response);
    });
}

const server = http.createServer(requestHandler);

const https = require('https');


function updateSchedule(onUpdate) { 
    https.get(SERVICE_URL, (resp) => {
        let data = '';
        resp.on('data', (chunk) => {
            data += chunk;
        });
        resp.on('end', ()=> {
            schedule = data;
            lastUpdate = new Date();
            if(onUpdate) {
                onUpdate(schedule);
            }
        })
    });
}

server.listen(port, (err) => {

    if(err) {
        return console.log('something bad happened');
    }
    console.log(`server is listening on port ${port}`);
    updateSchedule();
}) 

One of the other advantages of having this proxy service is that there is now a layer for hiding any additional information that is necessary for accessing the service of interest.  For example, if you are communicating with a service that requires a key or app ID for access, that information never flows through to the client.

Some configuration was necessary for deployment, but not much.  I had to add a simple app.yaml file to the project.  These are the contents.

# [START runtime]
runtime: nodejs10
# [END runtime]

Deployment of the application was unexpectedly easy.  I already had the source code stored in a git repository.  App Engine exposes a Linux terminal through the browser.  I cloned my repository and typed a few commands.

$  export PORT=8080 && npm install
$  gcloud app create
$  gcloud app deploy

After answering YES to a configuration prompt, the application was deployed and running.

One might wonder why I have the code for my application hosted in two different services; I could have placed the entire thing in App Engine.  My motivation for separating them is that I plan to have other applications interface with the same service, so I wanted to keep the client-specific code separate from the service interface.

Linux On Dex: Works on WiFi Tab S4 Models Only

Update 2018-Dec-11: I’ve spoken to a LoD team member, and to jump straight to the point: if you have an LTE Tab S4, the required update simply isn’t available at this time, and there is no information on when it will be available.

Some people trying to install Linux on Dex are running into an obstacle. After installing the app and trying to run it, they get the following error message.

Linux on Dex requires your device to have the latest software to support some features.

After this message is acknowledged, the application closes. If someone with this error checks for updates in the app store or for updates to the operating system, they get a notification that everything is up to date. What’s going on? I contacted LoD support about this and got back the following response.

Currently, the Linux on DeX(beta) requires latest SW for Galaxy Note9 and Galaxy Tab S4. SW update schedule may vary depends on the region and carrier.


What does this mean? It means that your device doesn’t have an update that is required for DeX and that your carrier might not have released it yet.  Devices sold through a carrier can be a bit slower to receive their updates. Samsung hasn’t been specific about the update needed.  I’ve communicated with someone on the Linux on Dex team and was told that LTE tablets in general do not have the update that is required for Linux on Dex, and that there is no information available on when particular updates will work their way through particular carriers.

BTW: Unlocking your device and installing a SIM from another carrier will not change this; this behaviour is dependent on the carrier for which the device was made, not on the SIM that happens to be in the device at the time.

Samsung Announced Exynos 9 with NPU

 

Consistent with what they said at their developer’s conference about wanting to extend the reach of their A.I., Samsung has announced a new System on Chip (SoC) with some A.I.-related features: the Exynos 9 Series 9820 processor. The processor contains an NPU, a unit for processing neural networks at speeds faster than a general-purpose processor alone could manage. The presence of this unit in the device hardware makes possible on-device experiences that would previously have required data to be sent to a server for processing. This may also translate into improvements in AR and VR experiences.

The NPU isn’t the only upgrade that comes with the processor. Samsung says the 9820’s new fourth-generation custom core delivers a 20% improvement in single-core performance or 40% better power efficiency compared to its predecessor, and multi-core performance is said to be increased around 15%. The Exynos 9820 also has a video codec capable of decoding 4K video at up to 150 frames per second in 10-bit color. The processor goes into mass production at the end of this year.

Source: Samsung