BrightSign HTML: Where is the Persistent Storage?

BrightSign Media Players work with a number of content management systems.  With a content management system, you can upload a BrightSign presentation as an asset and it will be distributed automatically to the units out in the field.

Recently, I was investigating the options for other kinds of persistent storage.  The assets to be managed were not a full presentation, but a few files that would be consumed by a presentation. As expected, the solution needed to be tolerant of a connection being dropped at any moment.  If an updated asset were only partially downloaded, the expected behavior would be for the BrightSign to continue with the last good set of assets it had until a complete new set could be downloaded.

The first thing that I looked into was whether the BrightSign units supported service workers.  If they did, a service worker would be a good place to put an implementation that checks for new content and initiates a download.  I also wanted to know what storage options were supported.  I considered indexedDB, localStorage, and caches.  The most direct way of checking for support was to make an HTML project that checks whether the relevant objects are available on the window object.  I placed a few fields on an HTML page and wrote a few lines of JavaScript to write the results into the page.

Here’s the code and the results.

function main() {
    $('#supportsServiceWorker').text((navigator.serviceWorker)?'supported':'not supported');
    $('#supportsIndexDB').text((window.indexedDB)?'supported':'not supported');
    $('#supportsLocalStorage').text((window.localStorage)?'supported':'not supported');
    $('#supportsCache').text((window.caches)?'supported':'not supported');
}
$(document).ready(main);
Feature          Support
serviceWorker    supported
indexedDB        supported
localStorage     supported
cache            supported

Things looked good at first.  Then I checked the network requests.  While inspection of the objects suggests that service worker functionality is supported, a call to register a service worker script did not result in the script being downloaded and executed.  No attempt was made to access it at all.  This means that service worker functionality is not actually available.  Bummer.
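A minimal sketch of that registration check (sw.js is a placeholder script name):

// The feature object exists, but does registration actually fetch the script?
if (navigator.serviceWorker) {
    navigator.serviceWorker.register('sw.js')
        .then(reg => console.log('service worker registered, scope:', reg.scope))
        .catch(err => console.log('registration failed:', err));
}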

Usually, I’ve used the cache object from within a service worker script, where its use was invisible to the rest of the code running in the application.  With service workers unavailable, the code for the presentation has to work with the object more directly.  Not quite what I would like, but I now know that this is one of the restrictions within which I must operate.

The caches object is usually used by a service worker, but it can also be used from the window.  While it is defined as part of the Service Worker specification, there is no requirement that it be used only by a service worker.

The next thing worth trying was to manually cache something and see if it could be retrieved.

var cache;
if(!window.caches)
    return;
window.caches.open('cache1')
    .then(function (returnedCache) {
        cache = returnedCache;
    });

This doesn’t actually do anything with the cache yet; I just wanted to make sure I could retrieve a cache object.  I ran this locally and it worked fine.  I tried again on the BrightSign player and got an unexpected result: window.caches is non-null, and I can call window.caches.open and get a callback, but the callback always receives a null object.  It appears that the cache object isn’t actually supported.  It is possible that I made a mistake, so I posted a message in the BrightSign forum and moved on to trying the next storage option, localStorage.

The localStorage option didn’t give me the results I expected on the BrightSign either. For the test, I made a function that would keep what I hoped would be a persistent count of how many times it had run.

function localStorageTest() {
    if(!window.localStorage) {
        console.log('local storage is not supported');
        return;
    }
    var result = localStorage.getItem('bootCount0') || 0;
    console.log('old local storage value is ', result);
    result = Number.parseInt(result) + 1;
    localStorage.setItem('bootCount0', result);
    result = localStorage.getItem('bootCount0');
    console.log('new local storage value is ', result);
}

When I first ran this, things worked as expected: my updated counts were saved to localStorage.  So I tried rebooting.  Instead of persisting, the count reset.  On the BrightSign, localStorage behaved exactly like sessionStorage.

Based on these results, it appears that persistent storage isn’t available through the HTML APIs.  That doesn’t mean it is impossible to save content to persistent storage; the solution to this problem involves NodeJS.  I’ll share more about how Node works on BrightSign in my next post.  It’s different from how one would usually use it.

-30-

BrightSign: An Interesting HTML Client

Among the many HTML-capable clients that I’ve worked with is an interesting family of devices called BrightSign Media Players (made by Roku). One of the things they do exceptionally well is play video.  They support a variety of codecs for that job, and it’s easy to have them go from one video to another in response to a network signal, a switch being pressed, or a touch on the screen. I’m using the XT1143 and XT1144 models for my tests.

For simple video-only projects, the free tool BrightAuthor works well.  But it is best suited to scenarios with relatively little navigational complexity.  For projects with more navigational complexity, or that require more complex logic in general, an HTML-based project may be a better choice.

After working on a number of HTML projects for BrightSign, I’ve discovered that the boundaries of what can and can’t be done have a different shape than on some other platforms.  Some things that are taxing to other devices work well on the BrightSign players, while other things that work fine on other players don’t work so well on BrightSign.

The BrightSign HTML rendering engine in devices with recent firmware is based on Chromium.

BrightSign Firmware Version    Rendering Engine
4.7-5.1                        WebKit
6.0-6.1                        Chromium 37
6.2-7.1                        Chromium 45
8.0 (not released yet)         Chromium 65

You can encounter some quirky rendering behavior on a device with an older firmware version.  At the time I’m writing this, the 8.0 firmware isn’t actually available (coming soon). I’ve found that while the device can render SVG, performance can suffer greatly if I try to animate SVG objects.  It is also possible to have no more than two media items (audio or video) playing at a time.  If an attempt is made to play more than two items, the third item is queued and will not begin to play until the previous two complete playing.

Rather than spending a lot of time developing and testing something in Chrome before deploying it to a BrightSign, it is better to start testing your code on the BrightSign as soon as possible.  The normal deployment process for code that runs from the BrightSign is to copy a set of files to an SD card, insert it into the BrightSign, reboot the player, and wait a minute or two for the device to start up and render your content.

Compared to testing locally, where you can just hit a refresh button to see how the page renders, this process is far too long.  A better alternative, if your development machine and BrightSign are on the same network and subnet, is to make a BrightSign presentation containing an HTML widget that points back to your development machine.  You’ll need a web server up and running on your machine, and the BrightSign presentation must use the URL for the page of interest.  You will also want to make sure that you have enabled HTML debugging; this is necessary for quick refreshes of the page.

When the BrightSign boots up, if everything is properly configured, you should see your web page appear.  You have access to all of the BrightSign-specific objects even though the page is being served from another machine.  You can inspect the elements of the page or debug the JavaScript by opening a Chrome browser on your development machine and browsing to the IP address of the BrightSign on port 2999.  Note that only one browser tab at a time can debug the code running on the BrightSign.

The interface you see there is identical to the Chrome Developer Console you would see when debugging locally.  If you make a change to the HTML, refreshing the page is simply a matter of pressing [CTRL]+[R] in the development window.  This triggers a refresh on the BrightSign too.

I’ll be working on a BrightSign project over the course of the next few weeks and will be documenting some of the other good-to-know things and things that do or don’t work well on the devices.

-30-

‘texture’ : no matching overloaded function found

I ran into this error when I was trying to use the same shader across different environments; it worked in most environments but gave me problems in WebGL on Chrome 72. The source of the problem was that my environments were using different versions of GLSL.  In some environments I needed to use the texture() function; in others I needed to use the texture2D() function. To keep compatibility across environments, I added a simple define at the beginning of my shader.

#if __VERSION__ < 130
#define TEXTURE2D texture2D
#else
#define TEXTURE2D texture
#endif
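With the macro in place, texture sampling is written once. A usage sketch follows (the uniform and varying names are illustrative, and it assumes an environment where varying and gl_FragColor are still available):

uniform sampler2D uTexture;
varying vec2 vUV;

void main() {
    // Resolves to texture2D() or texture() depending on the GLSL version.
    gl_FragColor = TEXTURE2D(uTexture, vUV);
}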


Is it Really a Hologram?

Photography and Holography, A Brief History

I was having a discussion about some recent articles from a few blogs. The articles spoke of what they labeled as holograms, but some of them didn’t actually have anything to do with holography despite their stated subjects. The question came up: “What is a hologram?” I think the answer can be better understood by contrasting holography with photography and briefly presenting the differences between the principles of the two technologies.

Development of Photography and Optics

As with many technologies, the contributions that led to photography came from incremental discoveries and developments over a long period of time. The principles of photography were developed well before those of holography. One of the earliest devices related to photography was the camera obscura (Latin: camera meaning chamber or room, obscura meaning dark). When we think of cameras now, we typically don’t think of rooms in a building; the usage of many words changes over time, and our use of the word camera today evolved from this one. A camera obscura can refer to a room or a box with light blocked off except for a single hole through which light is allowed to enter. An upside-down image of the scene outside is projected on the wall opposite the hole. The earliest known writings that mention such a device are found in the writings of the philosopher Mozi. From the camera obscura, Mozi asserted that light travels in a straight line, and his followers developed an optical theory based on this.

Camera obscura (Image credit: Wikipedia)

There had been two prevailing theories of how vision worked. The emission theory of vision, supported by people such as Ptolemy and Euclid, hypothesized that the eyes emitted something, and that for us to see, these emissions had to collide with the object being perceived. The intromission theory of vision, supported by Aristotle, hypothesized that physical forms of the object were entering one’s eye. Around 1011-1020 C.E. the “Book of Optics” was written by Alhazen. His theory was that light of different colors travels from an illuminated object in every direction. Through experiments with lenses and mirrors, he developed a more complete theory of how light travels, but he could not answer the question of how that light forms an image in the eye. Kepler addressed the question of how images form, and also saw the human mind as playing an active role in the perception of images.

It had already been known that exposure to light would change the color of certain substances. In 1727 in Germany, Johann Heinrich Schulze published the results of experiments showing that the darkening of silver salts was due to exposure to light. The first person to capture images through such a process was Thomas Wedgwood, but his images were less than permanent, as they would fade with further exposure to light. It wasn’t until 1826 that Joseph Nicéphore Niépce was able to create the first permanent image. He used a camera obscura with an eight-hour exposure time through a process he called heliography (Greek: helio from sun and -graphy from writing/message). Niépce partnered with Louis-Jacques-Mandé Daguerre to improve the process, and Daguerre carried on with the work on improving the contrast after Niépce’s death. Henry Fox Talbot had independently developed a process for fixing silver salts, only to find that Daguerre had accomplished this before him. Nevertheless, he sent a paper to the Royal Institution titled “Some Account of the Art of Photogenic Drawing.” In his process a negative image was captured, and that negative image was later copied to a positive image. By contrast, the daguerreotype process created direct images. The images from daguerreotypes were sharper, but the negative in Talbot’s two-step process allowed unlimited positive images to be produced. The first daguerreotype camera was produced in 1839.

Early daguerreotype camera (Image credit: Wikipedia)

With the process improvements, instead of an exposure that lasted for hours in a dark room, exposures took minutes with a portable box. Instead of a shutter, a lens cap was removed from the front of the device. As film became more sensitive, exposure times were reduced from minutes to seconds, and a mechanical shutter was added to better control them. In 1885 George Eastman started producing paper film; by 1889 he had changed to celluloid film. Eastman decided to sell cameras at a loss, expecting to make the money back from sales of film. The first camera was called the “Kodak.” In 1975 Kodak engineer Steven Sasson made a camera with an electronic sensor. The images were captured at a resolution of 0.1 megapixels. He also combined the sensor with parts from a movie camera to save a series of images to a cassette tape that could be viewed on a TV monitor. Twenty-five years later, flash memory started to replace film and magnetic tape.

Beginning of 3D Imaging

The same year that the first daguerreotype camera was produced, Sir Charles Wheatstone invented the reflecting mirror stereoscope. He used mirrors at 45 degrees to the viewer’s eyes so that each eye would see a slightly different drawing. Through binocular depth perception, the two images were experienced by the viewer as a single three-dimensional scene.

Mirror stereoscope (Image credit: Wikimedia)

The same year, David Brewster created a simple stereoscope, crediting the idea to a teacher of mathematics named Elliot who is said to have come up with it in 1823. Brewster improved upon the stereoscope concept with the lenticular (lens-based) stereoscope, also known as the Brewster stereoscope. When the design was taken to France, Jules Duboscq improved on it further with the creation of stereoscopic daguerreotypes. In 1861 Oliver Wendell Holmes made a version of the stereoscope that was easier to produce. The View-Master stereoscope was patented in 1939. In 1950 a device called the “Sensorama” was created, designed to present stereoscopic motion pictures, smells, feelings, and sound. About the same time, Douglas Engelbart (inventor of the mouse) was experimenting with using screens as input and output devices. In 1968 the first system that would be described in modern terminology as “augmented reality” was created by Ivan Sutherland and Bob Sproull. It was heavy, the headset had to be suspended from the ceiling, and the graphics it displayed were wireframes.

View-Master, one version of the stereoscope (Image credit: Wikimedia)

Holmes stereoscope (Image credit: Wikimedia)

Development of Holograms

The development of holograms occurred much more recently. In 1947 Dennis Gabor developed holographic theory; his efforts were aimed at improving the quality of images from electron microscopes. In electron holography, a subject is placed in a diverging electron beam. Electrons scattered by the object and electrons undisturbed by the object both strike a detector and create an interference pattern with each other, and an image of the object is reconstructed from this interference pattern. Holograms made with light didn’t occur at the time, in part because of the properties of available light sources. Many light sources emit light that falls across a spectrum of wavelengths (colors) and is not a single pure color. It wasn’t until 1960 that a suitable light source became available through the work of N. Basov, A. Prokhorov, and Charles Townes with the development of the laser. Light emitted from a laser has two properties that are vital to making holograms: it is monochromatic (a single pure color) and it is coherent. One might wonder, if single-color light is needed, why not add a color filter to a light bulb? Most light filters will reduce, but not necessarily completely eliminate, the other wavelengths of light; and filtered light is still not coherent. (Note: LED lighting achieves being monochromatic without being coherent.)

Components of the original ruby laser (Image credit: Wikimedia)

Understanding light coherence requires a bit of knowledge of the wave-particle duality of light. Those that performed the double-slit experiment in a physics class may remember discussing this. Consider ripples on the surface of water from a pebble being dropped in. If you could view a cross section of one of the waves, you’d see the ripples form crests and troughs. If two pebbles are dropped close to each other at the same time, waves will emanate from two spots and overlap and interfere with each other. There will be areas where two crests coincide, forming an even higher crest (constructive interference); areas where two troughs coincide, forming a lower trough (constructive interference); and areas where a trough from one wave and a crest from the other coincide, leaving the water at the same level it was at before there were any waves (destructive interference). The distance between corresponding parts of the wave (e.g., from one crest to the next) is the wavelength. The number of crests or troughs that occur over some period of time is the wave’s frequency. The same principle occurs with sound waves; noise-cancelling headphones use destructive interference to reduce the intensity of the sound waves reaching the wearer’s ears. The same principle is also applicable to light. Most normal light sources emit waves that are out of phase with each other, while the waves in laser light are in phase; that is what makes laser light coherent.

One process for producing light holograms is similar to that of electron holography. Instead of a detector being hit with undisturbed and scattered electrons, a detector is hit with undisturbed and scattered particles of light (photons). The detector is a holographic plate. Because slight movement of the subject being holographed, the light source and optics, or the holographic plate would change the interference pattern, it’s necessary for all of these parts to be absolutely still while the “image” is being made. After the exposure, the holographic plate can be fixed/developed so that further exposure to light won’t damage the recording.

Looking at an object through a hologram is like looking at an object through a window. Breaking a hologram in half doesn’t prevent you from seeing the holographed scene; it’s analogous to reducing the size of the window through which you look by painting over part of it. You can still see outside, but the number of angles from which you can view the scene is reduced. If you move your head to the left or right, your perspective on the holographed objects changes, which contributes to the perception of depth. Each observer of a hologram sees it from her own perspective, and each eye having its own perspective provides the stereoscopic depth cue.

Is it Really a Hologram?


Returning to the discussion that inspired this entire post: when I was commenting on the articles, I was surprised that an article that mentioned holograms in its title was actually about holograms. Most articles I’ve come across mentioning holograms are not about holograms. What about the HoloLens? It is described as “the first fully untethered, holographic computer, enabling you to interact with high-definition holograms in your world.” Are these really holograms? No, not in the sense the word is used in holography. Computer-based systems are full of terms and names that have been borrowed from other items and concepts, and we often use these terms without thinking much about them. An audio streaming application isn’t really a radio. The root graphical interface on my computer isn’t really a desktop. There’s a long tradition of adopting terms as metaphors, and after those terms are used long enough they come to denote the item for which they have been used. Internet radio isn’t radio, but it may offer some of the experience of using a radio. The root graphical interface of some computers has been called the “desktop” for 34 years at the time that I’m writing this. Similarly, the images viewed through the HoloLens are not products of holography, but there are elements of the experience of viewing a hologram that one has with the HoloLens. If you move your head from left to right, the perspective on the object changes. Several perceptual depth cues are experienced, including stereoscopic images, parallax, and perspective transformations of the represented object. The use of the word seems to communicate well what to expect from the experience: the presence of an image with depth.


BigInt in JavaScript

As a developer, there are some problems that I get enjoyment out of solving.  There are also some problems for which JavaScript had not been my tool of choice because of the limited precision of its Number type.  That is no longer the case, thanks to the JavaScript type BigInt.  The number of bytes used to store a BigInt scales with the magnitude of the number.  On browsers that support it, the following JavaScript code shows a difference between Number and BigInt: the BigInt value increases exactly as one would expect, while the Number calculation silently loses precision.

var myBigInt = BigInt(Number.MAX_SAFE_INTEGER);
var myBigResult;
console.log('BigInt value ', myBigInt);
myBigResult = myBigInt * 4n;
console.log('BigInt value * 4 = ', myBigResult);

var myNumber = Number.MAX_SAFE_INTEGER - 0.9;
var myResult;
console.log('Number value ', myNumber);
myResult = myNumber * 4;
console.log('Number value * 4 = ', myResult);

The output for the above was as follows:

BigInt value  9007199254740991n
BigInt value * 4 =  36028797018963964n
Number value  9007199254740990
Number value * 4 =  36028797018963960

For any operation that involves values beyond the maximum safe integer value, the resulting value could be wrong.  Notice above that subtracting 0.9 had the effect of subtracting a full 1, and the two products differ as a result.  It is also possible to have values that appear identical when printed as a string but are unequal to each other when compared.  BigInt literals are expressed as an integer number suffixed with a lowercase ‘n’.  If you use the typeof operator on a BigInt, the string 'bigint' is returned.
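A few quick checks illustrate these points:

console.log(typeof 10n);   // 'bigint'
console.log(10n === 10);   // false: strict equality compares types
console.log(10n == 10);    // true: loose equality coerces
console.log(10n + 1n);     // 11n
// console.log(10n + 1);   // TypeError: BigInt and Number can't be mixed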

While there are no additional floating-point types that offer higher precision, BigInt can be used for some kinds of calculations.  For example, if you needed a large decimal value for money calculations, you could use BigInt and have your presentation of the results account for the fact that the number type is not storing a decimal position.  If the result of a calculation were 1234, then when printing the number it could be converted to a string and a period inserted in the right position, presenting the string 12.34 to the user.
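A minimal sketch of that presentation step, keeping money as BigInt cents:

// Sketch: arithmetic stays in BigInt cents; the decimal point is
// inserted only when formatting for display.
function formatCents(cents) {
    var sign = cents < 0n ? '-' : '';
    var abs = cents < 0n ? -cents : cents;
    return sign + (abs / 100n) + '.' + (abs % 100n).toString().padStart(2, '0');
}

console.log(formatCents(1234n)); // '12.34'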

The BigInt type is supported in Chrome 67.  Apple added support for Safari version 12.  Mozilla is currently working on support.  Microsoft is also working on an implementation.


Basic Hue Lighting Control: Part 2

This is the second part of a two-part post.  The first part can be found here.

At the end of the first part, I had gotten discovery of the bridge implemented and had performed the pairing of the bridge.  In this part, I will show you how to create a query for the state of the light groups and control them.

Querying Group State

I’m only allowing modification of the state of groups of lights on the Hue.  First I need to query the bridge for what groups exist.  The list of groups and the state of each group are available at http://${this.ipAddress}/api/${this.userName}/groups, where this.userName holds the user name that was returned from the Hue bridge during the pairing process.  With this information I am able to create a new UI element for each group found.  I only show groups of type “Room” from this response; it is also possible for the user to group an arbitrary set of lights together, but I don’t show those.
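A minimal sketch of that query (ipAddress and userName come from the pairing step; the method name is illustrative):

    getGroups() {
        return fetch(`http://${this.ipAddress}/api/${this.userName}/groups`)
            .then(resp => resp.json())
            .then(groups => Object.entries(groups)
                .filter(([id, group]) => group.type === 'Room'));
    }

Separately, the application persists the connection information for the paired bridge in indexedDB, as described in Part 1. The following object wraps the object store used for that.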

var hueDB = (function () {
    var db = {};
    var datastore = null;
    var version = 1;
    db.open = function (callback) {
        var request = indexedDB.open('hueDB', version);
        request.onupgradeneeded = function (e) {
            var db = e.target.result;
            e.target.transaction.onerror = db.onerror;
            db.createObjectStore('bridge', { keyPath: 'bridgeID' });
        };
        request.onsuccess = function (e) {
            datastore = e.target.result;
            callback();
        };
    };

    db.getBridgeList = function () {
        return new Promise((resolve, reject) => {
            var transaction = datastore.transaction(['bridge'], 'readonly');
            transaction.onerror = function (e) {
                reject(e.error);
            };
            transaction.oncomplete = function (e) {
                console.log('transaction complete');
            };

            var objStore = transaction.objectStore('bridge');
            objStore.getAll().onsuccess = function (e) {
                console.log('bridge retrieval complete');
                resolve(e.target.result);
            };
        });
    };

    db.addBridge = function (bridge) {
        console.log('adding bridge ', bridge);
        return new Promise((resolve, reject) => {
            var transaction = datastore.transaction(['bridge'], 'readwrite');
            transaction.onerror = function (e) {
                reject(e.error);
            };
            // Transactions signal completion with oncomplete, not onsuccess.
            transaction.oncomplete = function (e) {
                console.log('item added');
            };
            var objStore = transaction.objectStore('bridge');
            var objectStoreRequest = objStore.add(bridge);
            objectStoreRequest.onsuccess = function (e) {
                resolve();
            };
        });
    };

    return db;
})();

Changing the State of a Light Group

There are several elements of a light group’s state that can be modified.  I’m only considering two: the brightness of the light group and whether the group of lights is turned on.  Both can be set with a PUT request to the bridge at the URL http://${this.ipAddress}/api/${this.userName}/groups/${id}/action.  This endpoint accepts a JSON payload.  Turning a group of lights on or off, changing the brightness, activating a scene to change the color, and many other options can be changed through this endpoint.  It is not necessary to specify all of the possible attributes when calling it; if an attribute is not specified, it will remain in its current state.  I have made a method named setGroupState that will be used by all other methods that make use of this endpoint.  The methods differ only in the payloads they build and pass to this method.

    setGroupState(groupName, state) {
        var id = this.groupToGroupID(groupName);
        var reqBodyString = JSON.stringify(state);
        return fetch(`http://${this.ipAddress}/api/${this.userName}/groups/${id}/action`, {
            method: "PUT",
            headers: { "Content-Type": "application/json" },
            body: reqBodyString
        })
            .then(resp => resp.json());
    }

Among the many attributes that can be packaged in the payload are bri and on.  The on state sets whether the lights are turned on.  The bri attribute accepts a value in the range of 0 to 254.  Note that a value of 0 doesn’t mean off; zero is the level assigned to the lowest illumination above off that the light will provide.
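A usage sketch (the group name and the hue instance are illustrative):

// Turn the group on at its full brightness, then back off.
hue.setGroupState('Living room', { on: true, bri: 254 })
    .then(resp => console.log(resp));
hue.setGroupState('Living room', { on: false });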

Activating Scenes

Scenes, collections of settings that apply to lights, can be associated with a predefined light group or with some arbitrary group of lights.  The Hue API labels scenes as either LightScene or GroupScene accordingly.  I am only working with group scenes.  A list of all of the scenes defined on the bridge is retrievable through the endpoint http://${this.ipAddress}/api/${this.userName}/scenes.

The object returned is a dictionary of scene IDs and their attributes.  A scene ID is a string of what appears to be random characters; it’s not user friendly and should only be used internally by the code, never presented to the user.   Here is a response showing only two scenes.

{
    "8AuCtLbIiEJJRNB": {
        "name": "Nightlight",
        "type": "GroupScene",
        "group": "1",
        "lights": [
            "2"
        ],
        "owner": "rF0JJPywETzJue2G8hJCn2tQ1PaUVeXvgB0Gq62h",
        "recycle": false,
        "locked": true,
        "appdata": {
            "version": 1,
            "data": "5b09D_r01_d07"
        },
        "picture": "",
        "lastupdated": "2017-01-16T23:35:24",
        "version": 2
    },
    "7y-J6Qyzpez8c2R": {
        "name": "Dimmed",
        "type": "GroupScene",
        "group": "1",
        "lights": [
            "2"
        ],
        "owner": "rF0JJPywETzJue2G8hJCn2tQ1PaUVeXvgB0Gq62h",
        "recycle": false,
        "locked": false,
        "appdata": {
            "version": 1,
            "data": "Nmgno_r01_d06"
        },
        "picture": "",
        "lastupdated": "2017-01-16T23:35:24",
        "version": 2
    }
}

To activate a scene on a group, I use the same endpoint that is used for turning light groups on and off or setting their brightness level.  The JSON payload has a single element named scene whose value is one of the cryptic-looking scene identifiers above.

    activateScene(sceneID) {
        if(sceneID in this.sceneList) {
            var scene = this.sceneList[sceneID];
            var group = scene.group;
            var req = { scene: sceneID };
            return this.setGroupState(group, req);
        }
        return Promise.reject(new Error('unknown scene'));
    }

Application Startup

To hide some of the work that occurs at startup, the application has a splash screen. The splash screen is only momentarily present; during the time it is shown, the application attempts to reconnect to the last bridge it had connected to and queries for the available groups and scenes. This is just enough of a distraction to hide the time taken by this additional setup.

The application splash screen

Installing and Running the Application

If you have downloaded the source code to your local drive, you can add the program to Chrome as an unpacked extension. In a browser instance, open the URL chrome://extensions.  In the upper-left corner of this UI is a button labeled “Load unpacked” (you may need to enable Developer mode for it to appear).  Select it.

The UI for loading unpacked Chrome extensions

You will be prompted to select a folder.  Navigate to the folder where you have unpacked the source code and select it.  After selecting it you will see the application in the list of installed extensions.


The application will now show up in the Chrome app launcher.  This may be exposed through the regular app launcher that is part of your operating system (such as the Programs menu on Windows) and will also appear in Chrome itself; close to the address bar is a button labeled “Apps.”

The application in the Chrome app launcher

Completing the Application

As I mentioned in the opening, this is not meant to be a complete application.  It is only an operational starting point, something functional enough to start testing different functions in the Hue API.

I will close by mentioning some other potential improvements.  For a user running the application for the first time, the setup process might be smoothed out by automatically trying to pair with the first bridge seen (if only one bridge is seen) and prompting the user to press the link button.  That makes setup a two-step process: start the application and press the link button on the bridge.  There could also be other people operating the Hue lighting at the same time that this application is running; periodically polling the state of the lights and light groups on the network and updating the UI accordingly would improve usability.  A user may also want to control individual lights within a group, or have control over the light color.  For that, a light-selection UI would also need to be developed.

It took me about an evening to get this far in the development, and it was something enjoyable to do during a brief pause between projects.  As such projects go, I’m not sure when I’ll get a chance to return to it.  But I hope that in its current form it will be of utility to you.

-30-


‘main’: input parameter ‘input’ missing semantics

If you’ve received the error message “input parameter ‘xxxx’ missing semantics” from an HLSL shader, the cause is a missing piece of information on one of your parameters or structures. Here is an example of a shader that will produce that error.

struct VSIn {
	float3 position;
	float4 color;
};

struct PS_IN
{
	float3 position:SV_POSITION;
	float4 color:COLOR;
};

PS_IN main(VSIn input)
{
	PS_IN output;
	output.position = input.position;
	output.color = input.color;
	return output;
}

Here the correction is to add the semantics POSITION and COLOR to the input structure. The corrected shader looks like the following.

struct VSIn {
	float3 position:POSITION;
	float4 color:COLOR;
};

struct PS_IN
{
	float3 position:SV_POSITION;
	float4 color:COLOR;
};

PS_IN main(VSIn input)
{
	PS_IN output;
	output.position = input.position;
	output.color = input.color;
	return output;
}

Basic Hue Lighting Control: Part 1

Screenshot of the Chrome application for controlling Hue lighting

Continuing from the post I made on SSDP discovery with Chrome, I’m making an application that does more than just discovery. In this post I’m going to show the starting point of a Chrome application for controlling your home Hue lighting. I’ve divided this into two parts: in this first part I show the process of pairing with the bridge; in the second part I’ll control the lights.

The features that this application implements include bridge discovery and pairing, the power state of the lights, and their brightness level. There are many other features that could still be implemented.  Given the full range of capabilities that the Hue kits support (changing color, timers, response to motion sensors, etc.), this will not be an application that utilizes the full capability of the Hue lighting sets.

Chrome Only

This application is designed to only run in Chrome. If you want to adapt it to run outside of Chrome, you can do so by first disabling SSDP discovery. (Other HTML application platforms might not support UDP for discovery.)

The other discovery methods (querying Hue’s discovery web service or asking the user to enter the IP address) can still work. A non-Chrome target will also need CORS to be ignored and will need to allow communication without SSL.

What is Hue Lighting?

Hue Lighting is an automated lighting solution made by Philips. Generally the lighting kits are sold in a package that contains three LED based light bulbs and a bridge. The bridge is a device that connects to your home network with an Ethernet jack and communicates with the light bulbs.

Philips also makes free applications for iOS and Android for controlling the lights. For any Hue light the light’s brightness and whether or not it is turned on can be controlled through the applications. Some lights also allow the color temperature to be changed (adjusting the tint between red, yellow and blue). Some lights support RGB (Red, Green, Blue) parameters so that their colors can be changed.  These settings can be individually adjusted or settings for a collection of the lights can be defined together as a “scene.” When a scene is activated the state of all of the lights that make up the scene are updated. Scenes can be activated through special light switches, through an app, through a schedule, or in response to a Hue motion sensor detecting motion.

Discovery: Review and New Methods

The central piece of hardware for Hue lighting is the Hue Bridge. At the time of this writing there are two versions of the bridge. For the functionality that this application uses, the differences between the two do not matter; the messaging and interaction with both versions are the same, and my UI will properly represent whichever bridge the system discovers. The first version of the Hue Bridge is round; the second version is square. In either case we must first find the bridge’s IP address before we can begin interaction.

Philips Hue Bridge version 1 (left) and version 2 (right)

The Hue bridge can be discovered in multiple ways. It can be discovered using SSDP. The basics of SSDP discovery were previously discussed here. Please refer back to it if you need more detail than what is found in this brief overview.  Devices that support SSDP discovery join a multicast group on the network that they are connected to. These devices generally wait for a request for discovery to be received. An SSDP request is sent as an HTTP over UDP message and every SSDP device that receives it responds with some basic information about itself and a URL to where more information on the device can be found. Examples of some devices that support SSDP are network attached storage; set top boxes like Android TVs and Rokus; printers; and home automation kits.

Two other methods of discovering a bridge are asking the user to enter an IP address and asking the Hue discovery service for a list of IP addresses of bridges on your network.  If you have a Hue bridge connected to your network right now, you can see its IP address by visiting https://discovery.meethue.com/. If you are on a shared network then you may also see the IP addresses of other bridges, and it is possible that not all of the bridges listed are reachable.  This method is much easier to implement than SSDP-based discovery, but on a network that has no Internet connection (whether by design or from an outage) it will not work.  The SSDP method depends only on the local network.
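The discovery service returns a JSON array of bridge records along the lines of the following (illustrative values):

[
    { "id": "001788fffe23a412", "internalipaddress": "192.168.1.12" }
]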

function discoverBridge() {
    discoveredHueBridgeList = [];
    fetch('https://discovery.meethue.com')
        .then(response => response.json())
        .then(function (hueBridgeList) {
            console.info(hueBridgeList);
            hueBridgeList.forEach((item) => {
                // Each item processed here exposes a bridge serial number
                // and IP address through item.id and item.internalipaddress.
            });
        });
}

Once I have a bridge IP address, I attempt to query it for more information. If communication succeeds, then I show a representation of the bridge with an icon that matches the version of the bridge that the user has. The UI layout has two images (one named hueBridgev1 and the other hueBridgev2); I show the appropriate image and hide the other.

Pairing

Now that the bridges have been discovered, it is up to the user to select one with which to pair. After the user selects a bridge, she is instructed to press the pairing button on the bridge. While this instruction is displayed, the application repeatedly attempts to request a new user name from the bridge. The user name should be viewed more as an access token; the Hue documentation uses the term “user name,” but the actual value is what appears to be a random sequence of characters. To request a user name, a JSON payload with one member named devicetype is posted to the bridge. The value assigned to devicetype matters little; it is recommended that it be a string unique to your application. The payload is posted to http://[your bridge IP address]/api. A failure response will result at first; this is expected. The application must repeatedly make this request and prompt the user to press the link button on the bridge.  The request will fail until the pairing button on the bridge is pressed.

function pairBridge(ipAddress) {
   console.info('attempting pairing with address ', ipAddress);
   var req = { devicetype: "hue.j2i.net#browser" };
   var reqStr = JSON.stringify(req);
   var tryCount = 0;
   return new Promise(function(resolve, reject)  {
      var tryInterval = setInterval(function () {
      console.log('attempt ', tryCount);
      ++tryCount;
      if (tryCount > 60) {
        clearInterval(tryInterval);
         reject();
         return;
      }
      fetch(`http://${ipAddress}/api`, {
         method: "POST",
         headers: {
            "Content-Type": "application/json"
         },
         body: reqStr
      })
      .then(function(response)  {
         console.log('text:',response);
         return response.json();
      })
      .then(function(data)  {
          console.log(data);
          if (data.length > 0) {
             var success = data[0].success;
             var error = data[0].error;
             if (success) {
                console.log('username:', success.username);
                var bridge = {
                   ipAddress: ipAddress,
                   username: success.username
                };
                clearInterval(tryInterval);
                 resolve(bridge);
                  return;
               }
               else if (error) {
                  if (error.type === 101) {
                     console.log('the user has not pressed the link button');
                  }
               }
            }
         });
      }, 2000);
   });
}

Once the button is pressed, the bridge responds to the first pairing request it receives with a user name that the application can use. This user name must be saved and used for calls to most of the functionality present in the bridge. I save the bridge’s serial number, IP address, and the user name that must be used for the various API calls to an indexedDB object store. The access information for multiple paired bridges could be stored in the object store at once, but the application will only communicate with one bridge at a time.
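A sketch of tying the pairing result to that storage, using the hueDB wrapper shown in Part 2 (bridgeSerialNumber here is a hypothetical value captured during discovery; the object store’s key path is bridgeID):

pairBridge(ipAddress).then(function (bridge) {
    hueDB.open(function () {
        hueDB.addBridge({
            bridgeID: bridgeSerialNumber, // hypothetical: from the discovery data
            ipAddress: bridge.ipAddress,
            username: bridge.username
        });
    });
});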

Continued in Part II

My Samsung HoloLab Model Arrived

Last week I received an e-mail from Samsung regarding the HoloLab scans that I did at the 2018 Samsung Developer’s Conference.  Shortly after the conference, I wrote about the rig that was used to do the scan.

When the photographs were taken for the scan, three poses were requested.  The first pose was standing with your arms crossed.  The second pose was standing with both of your arms out to the side.  The third pose was freestyle: whatever you wanted (just for the fun of it).

Of these three poses, I was most interested in the second, because arms out to the side is the most appropriate pose for importing a model into animation software.  Sadly, what arrived in my e-mail was only one of the three poses: the first one.

I’m still happy to have received the one pose that I did, and the model definitely resembles me.  This is speculation on my part, but I imagine that processing the 52 images that make up a single scan is time consuming. Considering the large number of participants at the conference who had scans done, receiving all three poses may be wishful thinking.


SSDP Discovery in HTML

While implementing a few projects, I decided to build them in HTML, since it would work on the broadest range of the devices I care about. These projects needed to discover additional devices connected to my home network, and I used SSDP for the discovery.


SSDP (Simple Service Discovery Protocol) is a UDP-based protocol, part of UPnP, for finding other devices and services on a network. It’s implemented by a number of devices, including network attached storage devices, smart TVs, and home automation systems. Many of these devices expose functionality through JSON calls, so you can easily make interfaces to control them. However, since the standards for HTML and JavaScript don’t include a UDP interface, how to perform discovery isn’t immediately obvious. Alternatives to SSDP include having the user manually enter the IP address of the device of interest, or scanning the network; the latter option can raise security flags when performed on some corporate networks.

For the most part, the solution to this is platform dependent. There are various HTML-based solutions that do allow you to communicate over UDP. For example, the BrightSign HTML5 players support UDP through the use of roDatagramSocket. Chrome makes UDP communication available through chrome.udp.sockets. Web pages don’t have access to this interface (for good reason, as there is otherwise potential for abuse), but Chrome extensions do. Chrome extensions won’t work in other browsers, but at the time of this writing Chrome accounts for 67% of the browser market share, and Microsoft has announced that they will use Chromium as the foundation for their Edge browser. So while this UDP socket implementation isn’t available in a wide range of browsers, it is available to a wide range of users, since this is the browser of choice for most desktop users.

To run HTML code as an extension there are two additional elements that are needed: a manifest and a background script. The background script will create a window and load the starting HTML into it.

chrome.app.runtime.onLaunched.addListener(function() {
    chrome.app.window.create('index.html', {
        'outerBounds': {
            'width': 600,
            'height': 800
        }
    });
});

I won’t go into a lot of detail about what is in the manifest, but I will highlight its most important elements. The manifest is in JSON format. The initial scripts to be run are defined in app.background.scripts. Other important elements are the permissions element, without which attempts to communicate over UDP or join a multicast group will fail, and the manifest_version element. The other elements are intuitive.

        {
            "name": "SSDP Browser",
            "version": "0.1",
            "manifest_version": 2,
            "minimum_chrome_version": "27",
            "description": "Discovers SSDP devices on the network",
            "app": {
              "background": {
                "scripts": [
                  "./scripts/background.js"
                ]
              }
            },
          
            "icons": {
                "128": "./images/j2i-128.jpeg",
                "64": "./images/j2i-64.jpeg",
                "32": "./images/j2i-32.jpeg"
            },
          
            "permissions": [
              "http://*/",
              "storage",
              {
                "socket": ["udp-send-to", "udp-bind", "udp-multicast-membership"]
              }
            ]
          }    

Google already has a wrapper for chrome.udp.sockets, published as a code example for using multicast on a network. In its unaltered form, the Google code sample assumes that text is encoded in UTF-16; SSDP uses 8-bit ASCII encoding. I’ve taken Google’s class and made a small change to it to use ASCII instead.
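A sketch of the kind of change involved, with the method name matching the call used later in this post:

// Decode a datagram as 8-bit ASCII rather than 16-bit characters.
MulticastSocket.prototype.arrayBufferToString8 = function (arrayBuffer) {
    return String.fromCharCode.apply(null, new Uint8Array(arrayBuffer));
};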

To perform the SSDP search the following steps are performed.

  1. Create a UDP port and connect it to the multicast group 239.255.255.250
  2. Send out an M-SEARCH query on port 1900
  3. Wait for incoming responses originating from port 1900 on other devices
  4. Parse the responses
  5. Stop listening after some time

The first item is mostly handled by the Google Multicast class; we only need to pass the port and address to it. The M-SEARCH query is a string. As for the last item, there is no definitive point at which responses stop coming in. Some devices appear to occasionally advertise themselves to the network even when not asked, so in theory you could keep getting responses. At some point I’d suggest simply no longer listening; five to ten seconds is usually more than enough time. There are variations in the M-SEARCH parameters, but the following can be used to ask for all devices (other queries can filter for devices with specific functionality). This is the string that I used; what is not immediately visible is that after the last line of text there are two blank lines.

M-SEARCH * HTTP/1.1
HOST: 239.255.255.250:1900
MAN: "ssdp:discover"
MX: 3
ST: ssdp:all
USER-AGENT: Joel's SSDP Implementation
    

When a response comes in, the function assigned to MulticastSocket.onDiagram will be called with a byte array containing the response, the IP address from which the response came, and the port number from which it was sent (which will be 1900 for our current application). In the following code sample, I initiate a search and print the responses to the JavaScript console.

const SSDP_ADDRESS = '239.255.255.250';
const SSDP_PORT = 1900;
const SSDP_REQUEST_PAYLOAD =    "M-SEARCH * HTTP/1.1\r\n"+
                                "HOST: 239.255.255.250:1900\r\n"+
                                "MAN: \"ssdp:discover\"\r\n"+
                                "MX: 3\r\n"+
                                "ST: ssdp:all\r\n"+
                                "USER-AGENT: Joel's SSDP Implementation\r\n\r\n";

var searchSocket = null;

function beginSSDPDiscovery() { 
    if (searchSocket)
        return;
    $('.responseList').empty();
    searchSocket = new MulticastSocket({address:SSDP_ADDRESS, port:SSDP_PORT});
    searchSocket.onDiagram = function(arrayBuffer, remote_address, remote_port) {
        console.log('response from ', remote_address, " ", remote_port);
        var msg = searchSocket.arrayBufferToString8(arrayBuffer);
        console.log(msg);        
    }
    searchSocket.connect({call:function(c) {
        console.log('connect result',c);
        searchSocket.sendDiagram(SSDP_REQUEST_PAYLOAD,{call:()=>{console.log('success')}});
        setTimeout(endSSDPDiscovery, 5000);
    }});    
}
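The endSSDPDiscovery function isn't shown above; a minimal sketch, assuming the wrapper exposes a disconnect method, just tears the socket down so a new search can be started:

function endSSDPDiscovery() {
    if (!searchSocket)
        return;
    searchSocket.disconnect();
    searchSocket = null;
}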

Not that parsing the response strings is difficult by any means, but it would be more convenient if each response were a JSON object. I’ve made a function that does a quick transform on the response so I can work with it like any other JSON object.

function discoveryStringToDiscoveryDictionary(str) {
    var lines = str.split('\r');
    var retVal = {}
    lines.forEach((l) => {
        var del = l.indexOf(':');
        if(del>1) {
            var key = l.substring(0,del).trim().toLowerCase();
            var value = l.substring(del+1).trim();
            retVal[key]=value;
        }
    });
    return retVal;
}    

After going through this transformation, a Roku streaming media player on my network returned the following response (I’ve altered the serial number).

{
    cache-control: "max-age=3600",
    device-group.roku.com: "D1E000C778BFF26AD000",
    ext: "",
    location: "http://192.168.1.163:8060/",
    server: "Roku UPnP/1.0 Roku/9.0.0",
    st: "roku:ecp",
    usn: "uuid:roku:ecp:1XX000000000",
    wakeup: "MAC=08:05:81:17:9d:6d;Timeout=10"
}

Enough code has been shared for the sample to be used, but rather than rely on the JavaScript console, I’ll change the sample to show the responses in the UI. To keep it simple, I’ve defined the HTML structure that I use for each result as a child of a div element of the class palette. This element is hidden, but for each response I clone the div element of the class ssdpDevice, change some of its child elements, and append it to a visible section of the page.

<html>
    <head>
        <link rel="stylesheet" href="styles/style.css" />
        <script src="./scripts/jquery-3.3.1.min.js"></script>
        <script src="./scripts/MulticastSocket.js"></script>
        <script src="./scripts/app.js"></script>
    </head>
    <body>
        <!-- Markup reconstructed to match the jQuery selectors used in the code. -->
        <button class="scanButton">Scan Network</button>
        <div class="responseList"></div>
        <div class="palette">
            <div class="ssdpDevice">
                <div>address: <span class="ipAddress"></span></div>
                <div>location: <span class="location"></span></div>
                <div>server: <span class="server"></span></div>
                <div>search target: <span class="searchTarget"></span></div>
            </div>
        </div>
    </body>
</html>

The altered function that displays the SSDP responses in the HTML follows.

function beginSSDPDiscovery() {
    if (searchSocket)
        return;
    $('.responseList').empty();
    searchSocket = new MulticastSocket({address:SSDP_ADDRESS, port:SSDP_PORT});
    searchSocket.onDiagram = function(arrayBuffer, remote_address, remote_port) {
        console.log('response from ', remote_address, " ", remote_port);
        var msg = searchSocket.arrayBufferToString8(arrayBuffer);
        console.log(msg);
        discoveryData = discoveryStringToDiscoveryDictionary(msg);
        console.log(discoveryData);

        var template = $('.palette').find('.ssdpDevice').clone();
        $(template).find('.ipAddress').text(remote_address);
        $(template).find('.location').text(discoveryData.location);
        $(template).find('.server').text(discoveryData.server);
        $(template).find('.searchTarget').text(discoveryData.st);
        $('.responseList').append(template);
    }
    searchSocket.connect({call:function(c) {
        console.log('connect result', c);
        searchSocket.sendDiagram(SSDP_REQUEST_PAYLOAD, {call:()=>{console.log('success')}});
        setTimeout(endSSDPDiscovery, 5000);
    }});
}

Working with non-SSL Web Services within an SSL page

I was making a Progressive Web App (PWA) and encountered a problem pretty quickly.  PWAs need to be served over SSL/HTTPS, and the services they access must also be served over SSL (a page served over SSL cannot access non-SSL resources).  Additionally, since my app is served from a different domain than the services it uses, there must be a Cross-Origin Resource Sharing (CORS) header permitting the application to use the data.  My problem was that I needed to access a resource that met neither of these requirements.

Failed to load http://myUrl.com: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://SomeOtherURL.com' is therefore not allowed access.

The solution to this seemed obvious: a proxy service that would consume the non-SSL feed and make the results available over HTTPS.  There exist some third-party services that can do this for you (My SSL Proxy, for example), but the services I found were not meant for applications and generally don’t add the required CORS headers.  Implementing something like this isn’t hard, but for a lightweight application for which I wasn’t planning on making any immediate revenue, I wanted to minimize my hosting costs.  This is where two services that Google provides come into play.

The first Google service is Firebase.

Firebase (available at https://Firebase.Google.com) allows you to host static assets in the Google cloud.  These assets are served over SSL, which made it a perfect place for hosting most of the source code that runs on the mobile device.

As for the service proxy, I made a proxy service that ran on the second Google service: App Engine.  Google's App Engine (available at https://cloud.google.com/appengine/) allowed me to write my proxy service using NodeJS (available at https://nodejs.org/).  I had it query the data I needed from the non-SSL service and cache the data for 30 seconds at a time.  All of Google's services use SSL by default, so I didn't have to do anything special there.  When returning the response, I added a few headers to handle the CORS requirements.  Here's the code for the Node server.  If you use it, you will need to modify it so that any parameters you need to pass to the non-SSL service are passed through; a sketch of that modification appears after the listing.

const http = require('http');
const https = require('https');

// App Engine supplies the port to listen on through the PORT environment variable.
const port = process.env.PORT || 8080;
const MAX_SCHEDULE_AGE = 30;    // seconds before the cached copy is considered stale
const SERVICE_URL = `YOUR_SERVICE_URL`;

var schedule = '[]';
var lastUpdate = new Date(1,1,1);


// Difference between two Date objects, in seconds.
function timeDifference(a,b) { 
    var c = (b.getTime() - a.getTime())/1000;
    return c;
}

function sendSchedule(resp) {
    // CORS headers so the PWA, served from a different domain, is allowed to read the response.
    resp.setHeader('Access-Control-Allow-Origin', '*');
    resp.setHeader('Access-Control-Request-Method', '*');
    resp.setHeader('Access-Control-Allow-Methods', 'OPTIONS, GET');
    resp.setHeader('Access-Control-Allow-Headers', '*');
    resp.end(schedule);
}
const requestHandler = (request,response) => {
    console.log(request.url);
    var now = new Date();
    var diff = timeDifference(lastUpdate, now);
    if(diff <= MAX_SCHEDULE_AGE) {
        // The cached copy is still fresh; return it as-is.
        sendSchedule(response);
        return;
    }
    console.log('schedule is stale. updating');
    updateSchedule((d)=> {
        console.log('schedule updated');
        sendSchedule(response);
    });
}

const server = http.createServer(requestHandler);

// Fetch a fresh copy of the data from the upstream service and update the cache.
function updateSchedule(onUpdate) { 
    https.get(SERVICE_URL, (resp) => {
        let data = '';
        resp.on('data', (chunk) => {
            data += chunk;
        });
        resp.on('end', ()=> {
            schedule = data;
            lastUpdate = new Date();
            if(onUpdate) {
                onUpdate(schedule);
            }
        })
    });
}

server.listen(port, (err) => {
    if(err) {
        return console.log('something bad happened');
    }
    console.log(`server is listening on port ${port}`);
    updateSchedule();
});
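
As mentioned before the listing, the code as shown doesn't forward any parameters to the non-SSL service.  A minimal sketch of one way to pass the client's query string through follows; it is an illustration under the assumption that the upstream service accepts the same query parameters the client sends.  Note that once parameters affect the response, the single schedule variable would need to become a cache keyed by query string.

// Sketch: forward the client's query string to the upstream service.
const url = require('url');

function updateScheduleFor(clientUrl, onUpdate) {
    var query = url.parse(clientUrl).search || '';    // e.g. '?team=12'
    https.get(SERVICE_URL + query, (resp) => {
        let data = '';
        resp.on('data', (chunk) => { data += chunk; });
        resp.on('end', () => {
            schedule = data;             // with parameters, this should be a keyed cache
            lastUpdate = new Date();
            if(onUpdate) {
                onUpdate(schedule);
            }
        });
    });
}

The request handler would then call updateScheduleFor(request.url, callback) in place of updateSchedule.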

One of the other advantages of having this proxy service is that there is now a layer for hiding any additional information that is necessary for accessing the service of interest.  For example, if you are communicating with a service that requires some key or app ID for access, that information never flows through to the client.
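
As a hypothetical sketch (the key query parameter and the SERVICE_API_KEY environment variable are illustrations, not part of any particular service), the upstream request might look like the following.  The key is read from the server's environment and appended server-side, so it never appears in anything delivered to the browser.

// Hypothetical sketch: SERVICE_API_KEY and the 'key' parameter are assumptions for illustration.
const https = require('https');

const SERVICE_URL = `YOUR_SERVICE_URL`;
const SERVICE_API_KEY = process.env.SERVICE_API_KEY;    // lives only in the server environment

https.get(`${SERVICE_URL}?key=${SERVICE_API_KEY}`, (resp) => {
    let data = '';
    resp.on('data', (chunk) => { data += chunk; });
    resp.on('end', () => {
        // Clients receive only this proxied data; the key itself is never sent onward.
        console.log('received', data.length, 'bytes');
    });
});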

Some configuration was necessary for deployment, but not much.  I had to add a simple app.yaml file to the project.  These are the contents.

# [START runtime]
runtime: nodejs10
# [END runtime]

Deployment of the application was unexpectedly easy.  I already had the source code stored in a git repository, and App Engine exposes a Linux terminal through the browser.  I cloned my repository and typed a few commands.

$  export PORT=8080 && npm install
$  gcloud app create
$  gcloud app deploy

After answering YES to a configuration prompt, the application was deployed and running.

One might wonder why I have the code for my application hosted on two different services when I could have placed the entire thing in App Engine.  My motivation for separating them is that I plan to have some other applications interface with the same service, so I wanted to keep the client-specific code separate from the service interface.

Augmented Reality with Samsung XR SDK

Samsung showed the XR SDK at the 2018 Developers Conference. While Microsoft has generally presented its reality technologies as lying along a spectrum (ranging from completely enveloping the user to only placing overlays on the real world), those technologies have always involved a head-mounted device. Samsung presents AR as something viewed either through a head-mounted device or through a portable handheld window into another world. The language used by the various companies varies a bit: Microsoft calls its range of technologies “mixed reality,” while Samsung calls theirs SXR, which stands for Samsung Extended Reality.

It was several years ago that Samsung first showed its take on VR with the release of the Note 4 and the developer’s edition GearVR. The GearVR is now available as a consumer product, but Samsung took an economical approach to the initial hardware for head-mounted augmented reality. Instead of creating custom hardware, they took some off-the-shelf products and combined them into an economical headset.

Experimental AR headset using off-the-shelf parts:

Part            | Description                                                                            | Cost      | Source
AR Headset      | 90° FOV “Drop-In”, fits phones 4.5 to 5.5 inches, 180 g                                | 65.99 USD |
External Camera | ELP VGA USB camera module with 100° FOV lens                                           | 24.69 USD |
OTG connector   | Wavlink USB 3.1 Type C Male to USB 3.0 Type A Female OTG Data Connector Cable Adapter  | 5.99 USD  | Amazon
Total Cost      |                                                                                        | 96.67 USD |

The Samsung XR SDK is almost a superset of the GearVR SDK. I say “almost” because with a proper superset you would find all the same class names that you would expect from the GearVR SDK. In the Samsung XR SDK, the classes exist within a new namespace and have been renamed. GearVR programs can be ported over with some changes to the class names being invoked.

An API standard for AR/VR experiences named OpenXR is in development. Once the standard is defined and released, Samsung plans for its XR SDK to be an implementation of that standard.

While the GearVR SDK was specifically for Samsung devices and the Samsung headset, the Samsung XR SDK will run on non-Samsung devices for through-the-window AR, and on the Oculus Go and Samsung devices for stereoscopic experiences.

Linux On Dex: Works on WiFi Tab S4 Models Only

Update 2018-Dec-11: I’ve spoken to a LoD team member and, to jump straight to the point: if you have an LTE Tab S4, then, simply put, the required update isn’t available at this time, and there is no information on when it will be available.

Some people trying to install Linux on Dex are running into an obstacle. After installing the app and trying to run it, they get the following error message.

Linux on Dex requires your device to have the latest software to support some features.

After this message is acknowledged, the application closes. If someone with this error checks for updates in the app store or for updates to the operating system, they get a notification that everything is up to date. What’s going on? I contacted LoD support about this and got back the following response.

Currently, the Linux on DeX(beta) requires latest SW for Galaxy Note9 and Galaxy Tab S4. SW update schedule may vary depends on the region and carrier.


What does this mean? It means that your device doesn’t have an update that is required for DeX and that your carrier might not have released it yet.  Devices sold through a carrier can be a bit slower in receiving their updates, and Samsung hasn’t been specific on the update needed.  I’ve communicated with someone on the Linux on Dex team and was told that LTE tablets in general do not have the update that is required for Linux on Dex. Additionally, that person told me there is no information available on when particular updates will work their way through certain carriers.

BTW: Unlocking your device and installing a SIM from another carrier will not change this; this behavior is dependent on the carrier for which the device was made, not on the SIM that happens to be in the device at the time.

Samsung Announced Exynos 9 with NPU

Consistent with what they said at the developer’s conference about wanting to extend the reach of their A.I., Samsung has announced a new System on Chip (SoC) with some A.I.-related features: the Exynos 9 Series 9820 processor. The processor contains an NPU, a unit for processing neural networks at speeds faster than what could be done with a general-purpose processor alone. The presence of this unit in the device’s hardware makes possible on-device experiences that would previously have required data to be sent to a server for processing. This may also translate into improvements in AR and VR experiences.

The NPU isn’t the only upgrade that comes with the processor. Samsung says the 9820’s new fourth-generation custom core delivers a 20% improvement in single-core performance, or 40% in power efficiency, compared to its predecessor. Multicore performance is said to be increased by around 15%. The Exynos 9820 also has a video codec capable of decoding 4K video at up to 150 frames per second in 10-bit color. The processor goes into mass production at the end of this year.

Source: Samsung

Bixby Developer Studio

Samsung says they would like to have AI implemented in all of their products by 2020. From the visual display shown during the SDC 2018 conference, it appears their usage of “all” is intended to be widely encompassing: phones, car audio systems, refrigerators, air conditioners…

Samsung is inviting developers to start engaging in development for its conversational AI. The same tools that Samsung uses internally for Bixby development are now publicly available. The development portal and the development tools for Windows and OS X are available now at https://bixbydevelopers.com.

Galaxy Home, a Bixby enabled smart speaker, was showcased as a target implementation for the SDK.

The “Media Control API” will be available to content partners this December for adding deeper control into applications. Samsung says Netflix and Hulu are on board and will begin development with it next year.

The Samsung Frame TVs are also being opened to developers by way of the Ambient Mode SDK. This will allow developer content to be shown when the TV is in its standby mode.