Enabling Development Mode on Samsung Tizen TVs

Modern Samsung TVs run the Tizen operating system. You can develop for them much as you would for the Tizen-based watches, though the TVs are locked down more than the watch is. To deploy to a Tizen TV you'll need to both enable developer mode and tell the TV the address from which it will be receiving code. If it receives requests from other addresses it won't respond to them.

On the consumer displays there is no obvious way to enable developer mode; the option is hidden. If you open the apps browser (for seeing what other apps there are to install) you can open the developer mode menu by entering "12345" on the remote. A popup window appears from which you can turn developer mode "On." If you are using one of the commercial displays (SSSP, or Samsung Smart Signage Platform) the method to enable developer mode is more obvious: the TV's menu has an option called URL Launcher Settings, and the developer mode option is within those settings.

On the consumer devices you'll also be asked to enter the IP address of the machine from which the development will occur. This prevents rogue devices on your network from doing anything to the TV. Here you should enter the IP address of your development machine.

After these options are set the TV needs to be rebooted before the changes fully apply. You can do this by holding the power button on a consumer TV for two seconds, holding the power off button on an SSSP display for two seconds, or removing the power source from the TV and reapplying it.

After the TV boots, developer mode is enabled. However, the mode being enabled doesn't mean that all of the conditions for deploying code have been met; you will also need to generate a distributor certificate. Samsung has a page with instructions for generating a certificate. In following those directions you will need the Device Unique ID (DUID). To get this you first need to connect to the TV. I prefer to use the sdb utility that comes with the Tizen SDK. It is located in tizen-studio/tools (adjust this path according to the location at which you installed Tizen Studio). The syntax for connecting is:

sdb connect <TV IP address>

Sometimes I have to type the command twice before it takes effect; you can verify the connection with the sdb devices command. After the connection is successful, open the Tizen Device Manager. You should see the TV connection within the UI. If you right-click on the connection you will have the option of selecting the TV's DUID. Select this option and copy the DUID to the system clipboard. Keep it there; when it is needed during certificate generation it will automatically be pasted where required.

If you at some point find that you need the TV extensions, don't have them installed, and don't see them in the package manager, you can install them using these instructions: https://developer.samsung.com/tv/develop/tools/tv-extension/download/

Creating a certificate based on the Device Unique ID (DUID) is slightly different for the two classes of displays. For the consumer displays a Samsung certificate should be created; for the commercial displays, a Tizen certificate. This can be a little confusing given that Tizen is a Samsung creation, but another perspective may help: the Samsung certificate is associated with the Samsung App Store. The consumer displays access the app store, and the certificate rules for that are different than for apps that have no access to the App Store.


Bixby Studio Available for all Bixby Compatible Devices @SDC19


Samsung announced today at the annual Developer Conference that Bixby Studio, their developer tool for building natural language interactions, is available on all devices that support Bixby. Previously this functionality was only available on mobile devices. With today's announcement it is available on other devices such as the TV, Tizen-powered refrigerators, and the watch.

To encourage developers to get started with Bixby development they’ve also opened a contest offering thousands of dollars in prizes. For more information on the contest visit BixbyDevJam.com.

Consumer v Commercial Displays

There are two mistakes that one might make about the difference between consumer and commercial displays.

Mistake 1: Commercial Displays are just Consumer Displays that Cost More

This is an easy mistake to make because at first glance the displays may look alike. But commercial displays are made to withstand a wider range of conditions than their consumer counterparts. An illustration that comes to my mind is a display I worked on that was installed in an airport. When the display opened to the public we saw some abuses that we didn't quite imagine. The installation included touch screens, and we expected people to touch the screens. We didn't expect people to set their children on top of the displays. Yes, this really happened. The displays survived the years that they were at the installation without problems, but I still consider some of what they endured to be borderline abusive. If a small child were set on a consumer display (do not do this) I'm pretty sure that it wouldn't last long.

That is just one of the tolerances that a commercial display may have that its consumer counterpart does not. Commercial displays may also have a higher tolerance for moisture (perhaps even supporting outdoor use), a higher temperature tolerance, and potentially a brighter screen (as might be needed for outdoor use).

Commercial displays may also have a number of features that their consumer counterparts do not. These may include additional connections (such as RS-232), the ability to control several displays at once (as one might want to do in a video wall configuration), and even internal media players or security features.

Mistake 2: A Commercial Display would make a good Home Display

This misconception comes from the idea that a commercial display is a consumer display with features added. The reality is that while commercial displays may have additional features, they may also be missing features that the consumer displays have. If you buy a typical consumer display above a certain size it will have the ability to run several consumer-oriented applications, such as a Netflix or Hulu player and a few others. The commercial displays don't have this, and that is understandable since they are not meant for these consumer activities. A person that pays the extra money for a commercial display may end up feeling quite disappointed after realizing which features are not available.

Samsung Consumer Displays v Samsung Commercial Displays

I'm looking at two displays that were made at about the same time. Both are made by Samsung; one is a consumer display and the other is a commercial display. Working out the differences between them has required my own exploration and experimentation. Samsung has a site at https://samsungDforum.com that contains information about the consumer displays. Unfortunately this information is only available to those that sign up for the Samsung Partner program. From what I've read about this program an NDA is required to enroll in it. I have not signed up for this program; if I did, I wouldn't be able to talk about the information gained within it. As part of my interest in the displays is to talk about them (on this blog), I'm instead gathering information both from experimenting with the display and through scraps of information available on the Internet.

The process of experimentation has had its moments of frustration, and I've already written some material on my experiences that will be posted in the future. In my next post on this topic I'll talk about the differences in the Samsung consumer and commercial displays.

 

Getting Ready for the Holiday Season with Phillips Hue and a Raspberry Pi

The holiday season is upon us; by the end of this month I expect to start seeing my neighbors put out their fall decorations. By mid-October, decorations for Halloween will show up. After Halloween the decorations roll back to fall themes only and then are changed to Christmas decorations right after New Year's. Two of these holidays tend to come with flashy displays and lights: Halloween and Christmas.


I primarily use Phillips Hue lighting throughout my house and it is a perfect companion for festive displays. The color bulbs adapt to any color scheme, and the newly released Edison-style bulbs add a warm glow to fall scenes. The Phillips Hue lighting sets are programmable if you are using a hub. While the new light bulbs have Bluetooth support so a phone can control them directly, there's no public API for that (yet). For programming, a hub is needed.


I’ve written on controlling the Phillips Hue lights before. Expanding on that I wanted to make a project that would let an IoT device trigger a scene according to some external event. I’ll use a motion sensor to trigger the relevant events.

 

But you could also use sound, a change in temperature, lighting, or time as sources. I'll be using a Raspberry Pi; it has network connectivity and can easily be interfaced to a number of devices. I'm using the Raspberry Pi Zero, but just about any Pi will do. Hue does offer a motion sensor; if one only wishes to control lights based on motion, a ready-made solution is available. But if one wishes to have other triggers, or to trigger other actions along with the lights, a custom solution is needed.

The Raspberry Pi 4 with a heat sink attached.

Raspberry Pi Zero with a 4-port USB hub

All that I want to happen is for the lighting pattern to change when a person is detected. I'll use a passive infrared sensor for presence detection. For Halloween I want a Hue light that is illuminating a jack-o-lantern to pulsate an orange color. When someone comes up to knock on the door I want the light for the front door to go bright white. A few moments after a person is no longer there I want the system to go back to its previous pattern. But past a certain hour I don't want this to continue; after 10:00pm the lights should extinguish. Simple enough, right?

 

This is the passive infrared sensor that I used.

The physical build for this circuit is easy. The passive infrared sensor (PIR) gets power from the VCC and ground pins of the Raspberry Pi. The signal line from the PIR can be connected to any of the GPIO pins; I'm going to use pin 3. The circuit will need to be put in an enclosure to protect it from rain, or humidity in general. If your enclosure doesn't already have a weather-protected way to get power in, your options are to either run the Pi off of a battery within the enclosure (which means periodic recharging) or drill a hole for the wires yourself and apply a sealant.

There are a lot of languages that I could use for writing my program on the Pi. Python, Java, and C/C++ make the top of the list (in no specific order). For this project I've decided to go with Java. To interact with the pins in Java we will need to import classes from com.pi4j.io and com.pi4j.wiringpi. These are not standard libraries; they exist to provide an interface to the pins. To demonstrate reading a pin in Java, here is a simple program that prints text in a loop reflecting the pin state.

import com.pi4j.io.gpio.GpioController;
import com.pi4j.io.gpio.GpioFactory;
import com.pi4j.wiringpi.Gpio;

public class PinTest {
   public static void main(String args[]) throws InterruptedException {
      // Creating the controller also initializes the underlying native GPIO library.
      final GpioController gpio = GpioFactory.getInstance();
      Gpio.pinMode(3, Gpio.INPUT);
      while(true) {
         if (Gpio.digitalRead(3) == Gpio.HIGH) {
            System.out.println("The Pin is ON");
         } else {
            System.out.println("The Pin is OFF");
         }
         Thread.sleep(100); // poll ten times a second instead of spinning
      }
   }
}

Phillips has an SDK for Java. You might see it presented as an SDK for Android, but it works fine in other Java environments. A convenience of this is that a significant portion of the development can be done on your computer of choice. I did most of the development on a Mac and used git to transfer the code to the Raspberry Pi when done.

20191005_162433.jpg

The color Hue lighting can take on a variety of colors.

The overall execution loop of the program checks whether or not the trigger condition has occurred. If the trigger condition has occurred then the program activates a scene; if not, it deactivates the scene. The program loop also contains some debouncing logic: depending on the type of sensor used and the sensor's characteristics, a sensor could change states with every cycle. I've chosen to only deactivate if a certain amount of time has passed since the last activation. For initial development, instead of interfacing to an actual sensor, I have a method that returns a random Boolean value. When the code is moved to the Raspberry Pi this method will be updated to read the state of the actual sensor. The following will only deactivate after there have been 2 seconds with no activation event.

    // Stand-in for the sensor; returns a random value during development on a PC.
    boolean getActivationState() {
        return random.nextBoolean();
    }

    void runLoop() throws InterruptedException{ 
        System.out.println("running");
        long lastActivation = System.currentTimeMillis();
        while(true) { 
            Thread.sleep(100);
            boolean isActivated = getActivationState();
            if(isActivated) {
                lastActivation = System.currentTimeMillis();
                activateScene();
            }
            else {
                long now = System.currentTimeMillis();
                if ((now - lastActivation)> 2000)
                    deactivateScene();
            }
        }
    }
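
One thing the loop above doesn't handle yet is the after-10:00pm shutoff mentioned earlier. Here is a minimal sketch of a check that could support it, using java.time (QuietHours and its 6:00am lower bound are my own assumptions; the post only specifies the 10:00pm cutoff):

import java.time.LocalTime;

// Hypothetical helper for the quiet-hours requirement.
class QuietHours {
    // Lights should stay off after 10:00pm (and, by assumption, before 6:00am).
    static boolean isAfterHours() {
        LocalTime now = LocalTime.now();
        return now.isAfter(LocalTime.of(22, 0)) || now.isBefore(LocalTime.of(6, 0));
    }
}

Inside runLoop() the sensor read could then be skipped, and deactivateScene() called, whenever QuietHours.isAfterHours() returns true.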

Controlling the lights happens through the Hue SDK. Before activating the lights the Hue bridge must be discovered. While Hue makes a series of lights that have Bluetooth controllers built in and can be controlled without the Hue Bridge, the public APIs currently only work through the bridge. It is a required hardware component.

The SDK already contains functions for discovering the bridge. All a developer needs to do is initiate a search and implement a callback object that will receive information on the bridges discovered. In the following I instantiate the Phillips Hue SDK object and register a listener. If the program had connected to a bridge before, the IP address of that bridge is loaded and it reconnects to it. Otherwise a search is initiated. As the search proceeds, the earlier registered listener receives callbacks.

private void init() {
    this.loadSettings();
    System.out.println("Getting SDK instance");
    phHueSDK = PHHueSDK.create();
    System.out.println("Setting App Name");
    phHueSDK.setAppName("HolidayLights");
    phHueSDK.setDeviceName("RaspPi");
    System.out.println("SDK initialized");
    phHueSDK.getNotificationManager().registerSDKListener(listener);

    if(this.getLastIpAddress()  != null) {
        System.out.println("Connect to last access point");
        PHAccessPoint lastAccessPoint = new PHAccessPoint();
        lastAccessPoint.setIpAddress(getLastIpAddress());
        lastAccessPoint.setUsername(getUserName());
        if (!phHueSDK.isAccessPointConnected(lastAccessPoint)) {
            phHueSDK.connect(lastAccessPoint);
        }
    } else {
        System.out.println("Searching for access point");
        PHBridgeSearchManager sm = (PHBridgeSearchManager) phHueSDK.getSDKService(PHHueSDK.SEARCH_BRIDGE);
        // Start the UPNP Searching of local bridges.
        sm.search(true, true);
    }
}

The listener is of type PHSDKListener. I won’t show the full implementation here but will show some of the more relevant parts.

When the bridges are found they are returned as a list. I’ve only got one on my home network and so I connect to the first one seen. If you have more than one bridge you’ll need to implement your own logic for making a selection.

@Override
public void onAccessPointsFound(List<PHAccessPoint> accessPoint) {
    System.out.println("Access point found");
    if (accessPoint != null && accessPoint.size() > 0) {
        System.out.println("Number of access points: " + accessPoint.size());
        phHueSDK.getAccessPointsFound().clear();
        phHueSDK.getAccessPointsFound().addAll(accessPoint);
        phHueSDK.connect(accessPoint.get(0));
    }
}
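
If you do have several bridges, one simple approach is to prefer a bridge at a known IP address rather than blindly taking the first entry. Here is a minimal sketch that could be dropped into the same class as the listener (preferredBridgeIp is a hypothetical configuration value of my own, and I'm assuming PHAccessPoint has a getIpAddress() getter to match the setter used earlier):

private PHAccessPoint selectAccessPoint(List<PHAccessPoint> accessPoints, String preferredBridgeIp) {
    // Prefer the bridge whose IP matches the configured address...
    for (PHAccessPoint ap : accessPoints) {
        if (ap.getIpAddress().equals(preferredBridgeIp)) {
            return ap;
        }
    }
    // ...and otherwise fall back to the first bridge found.
    return accessPoints.get(0);
}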

When the connect attempt is made it is necessary to press the pairing button on the bridge; the console will print a message from the SDK saying this. Once the bridge is connected I save an instance of the bridge along with its IP address and the username for later use.

@Override
public void onBridgeConnected(PHBridge b, String username) {
    HolidayController.this.bridge = b;
    isBridgeConnected = true;
    System.out.println("on bridge connected...");
    phHueSDK.setSelectedBridge(b);
    phHueSDK.enableHeartbeat(b, PHHueSDK.HB_INTERVAL);
    phHueSDK.getLastHeartbeat().put(
        b.getResourceCache().getBridgeConfiguration().getIpAddress(),
        System.currentTimeMillis());
    setLastIpAddress(b.getResourceCache().getBridgeConfiguration().getIpAddress());
    setUserName(username);
}

After the bridge connects, the SDK will query the state of the lights on the system and update some objects representing the last known state of each light. The first time the cache is updated the program prints the name and identifier of each light. This information is useful for selecting which lights will be controlled. The light list is saved for the program to use.

@Override
public void onCacheUpdated(List<Integer> arg0, PHBridge bridge) {
    if(!isDeviceListPrinted) {
        PHBridgeResourcesCache rc = bridge.getResourceCache();
        List<PHLight> lightList = rc.getAllLights();
        HolidayController.this.lightList = lightList;
        ListIterator<PHLight> it = lightList.listIterator();
        while(it.hasNext()) {
            PHLight l = it.next();
            System.out.println(l.getIdentifier() + "    " + l.getName());
        }
        isDeviceListPrinted = true;
    }
}

With that in place we now have enough information to change the state of the lights. To test things out I started with implementations of activateScene and deactivateScene that just turn all the Hue lights on and off (don't do this if there are other people in your dwelling whom this would affect).

void activateScene() {
    ListIterator<PHLight> it = lightList.listIterator();
    while(it.hasNext()) {
        PHLight l = it.next();
        System.out.println(l.getIdentifier() + "    " + l.getName());
        PHLightState state = l.getLastKnownLightState();
        state.setOn(true);
        state.setBrightness(254);
        l.setLastKnownLightState(state);
        bridge.updateLightState(l.getIdentifier(), state, NOPListener);
    }
}

void deactivateScene() {
    ListIterator<PHLight> it = lightList.listIterator();
    while(it.hasNext()) {
        PHLight l = it.next();
        System.out.println(l.getIdentifier() + "    " + l.getName());
        PHLightState state = l.getLastKnownLightState();
        state.setOn(false);
        l.setLastKnownLightState(state);
        this.bridge.updateLightState(l.getIdentifier(), state, NOPListener);
    }
}

If the program is run at this point the lights will turn on and off somewhat randomly. Ultimately we don't want it to control all the lights; instead I want to be able to specify which lights it is going to control. I've made a JSON file that contains a couple of elements. One is the RGB color that I want to use in the form of an integer; the other is an array of numbers where each number is the ID of a light to be controlled. The RGB color is specified here as a base 10 number instead of the usual base 16 that you may see used for RGB codes (16711935 is 0xFF00FF, a magenta). Unfortunately JSON doesn't support hexadecimal numbers 🙁.
{
    "lights":[5, 7, 9],
    "color": 16711935
}

These values are read by the code. Before the code acts on any light it checks whether that light's identifier is in the array. During activation, if the identifier is in the array, the light's state is set to on, brightness is set to full, and the color is applied. The color must be converted to the right color space before being applied to the light, which is done with a utility function that the SDK provides.
void activateScene() {
    System.out.println("activating scene");
    ListIterator<PHLight> it = lightList.listIterator();
    while(it.hasNext()) {
        PHLight l = it.next();
        if(isTargetLight(l.getIdentifier())) {
            System.out.println(l.getIdentifier() + "    " + l.getName());
            PHLightState state = l.getLastKnownLightState();
            state.setOn(true);
            state.setBrightness(254);
            // Split the packed RGB integer into its red, green, and blue bytes.
            float[] xy = PHUtilities.calculateXYFromRGB(
                0xFF & ((int)color >> 16),
                0xFF & ((int)color >> 8),
                0xFF & (int)color, l.getModelNumber()
            );
            state.setX(xy[0]);
            state.setY(xy[1]);
            l.setLastKnownLightState(state);
            bridge.updateLightState(l.getIdentifier(), state, NOPListener);
        }
    }
}

void deactivateScene() {
    System.out.println("deactivating");
    ListIterator<PHLight> it = lightList.listIterator();
    while(it.hasNext()) {
        PHLight l = it.next();
        if(isTargetLight(l.getIdentifier())) {
            System.out.println(l.getIdentifier() + "    " + l.getName());
            PHLightState state = l.getLastKnownLightState();
            state.setOn(false);
            l.setLastKnownLightState(state);
            this.bridge.updateLightState(l.getIdentifier(), state, NOPListener);
        }
    }
}
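
The isTargetLight() check and the loading of the JSON settings aren't shown above. Here is a minimal sketch of how they could look, assuming the org.json library is on the classpath (SceneSettings and its method names are my own, not part of the project above; the Hue SDK reports light identifiers as strings, which is why the numeric IDs are stored that way):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.Set;
import org.json.JSONArray;
import org.json.JSONObject;

// Hypothetical helper that backs isTargetLight() and supplies the color value.
public class SceneSettings {
    private final Set<String> targetLights = new HashSet<>();
    private int color;

    public void load(String path) throws Exception {
        JSONObject root = new JSONObject(new String(Files.readAllBytes(Paths.get(path))));
        color = root.getInt("color");
        JSONArray lights = root.getJSONArray("lights");
        for (int i = 0; i < lights.length(); i++) {
            // The SDK reports identifiers as strings ("5", "7", ...).
            targetLights.add(Integer.toString(lights.getInt(i)));
        }
    }

    public boolean isTargetLight(String identifier) {
        return targetLights.contains(identifier);
    }

    public int getColor() {
        return color;
    }
}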

The last steps needed to make the device work as intended are updating the getActivationState() function to read the actual state of the motion sensor instead of a random value, and wiring the motion sensor to the Raspberry Pi. From here on, the code will only work on a Raspberry Pi, since the libraries for reading the pins are only applicable to that device. It is possible to dynamically load class libraries and use them as needed for the specific platform on which code is running, but that is beyond the scope of what I wish to discuss here.
I'm declaring a GpioController variable at the class level and instantiating it in the constructor. I also set the mode of the IO pin that I'll be using to input.
    GpioController gpio;

    HolidayController() {
        gpio = GpioFactory.getInstance();
        Gpio.pinMode(3, Gpio.INPUT);
        //....
    }

The getActivationState() implementation then needs only a single statement.
boolean getActivationState() {
    // digitalRead returns an int; treat a high signal as "activated."
    return Gpio.digitalRead(3) == Gpio.HIGH;
}

With that change it will now work. If the Raspberry Pi is placed in a position where the motion sensor has a view of the space of interest, it will control the lights. If you are using one of the earlier Raspberry Pis (anything before the Raspberry Pi 4) you should also be able to power the Pi off of a portable phone charger; many of them make sufficient batteries for the Pi. The Raspberry Pi 4 has higher energy requirements and you may run into more challenges finding a portable power supply that works.
Why use the Pi at all for this? Because there is a lot of room to expand, such as using the video capabilities of the Pi to power a display or controlling other devices. Controlling the lights is a start; I'll be revisiting this project for add-ons in the future.
If you want to start on something similar yourself the following (affiliate) links will take you to the products on Amazon.
Parts Lists

Developing for older Samsung TVs

If you already have a Samsung TV and want to start developing for it, chances are you don't have the latest and greatest model. But when you install the Tizen development tools they only target two operating system versions: the latest version that is out now and the version that has yet to be released. Your TV is too old! So what can you do?

If you check the Tizen development forums the suggestion is to install an older version of the development tools. But that's no fun! And it is possible to develop for the older TVs with the newer tools. Go ahead and install the latest version of Tizen Studio first. While that is installing, download an older version of the Extensions for TV; you can find them at this site. As you scroll through the available versions you will see that you can't download anything older than the 3.0 version. Download the 3.1 or 4.0 extensions. Don't worry, these extensions also contain the components needed for TVs running Tizen 2.3 and 2.4.


After Tizen Studio is installed, open the package manager. In the upper right corner of the package manager is a gear icon. Select it.

 


Expand the "Extensions SDK" area of the window to see the extensions installed and click on the + button to add an extension. A window opens asking for a URL. Leave the URL blank and click on the three dots next to it. You'll be asked to navigate to a local archive of the extension you wish to add. Navigate to the file that you downloaded earlier and select it. The package manager will take a few moments to install the extension.

When you attempt to create a new project and look at the TV templates available, there are only the 4.0 and 5.0 projects. What gives? The missing project templates can be found under the Custom projects. Select "TV-Samsung v3.0." Even if you have a TV running Tizen 2.3 this option will work. When you click the next button you'll see the familiar project templates.

Listing Applications on a Tizen Device

In a Tizen project I was working on, I found that Tizen Web alone wasn't enough to accomplish my goal. Some of the functionality required a native application (more on that in another blog post). Rather than write the application completely in native code, I used HTML for the UI and a native service for the other functionality. This is a Tizen hybrid application.

The Tizen documentation wasn't quite clear to me on what identifier to use when launching a service packaged with an HTML application. It mentions using the App ID. This didn't work for me. I only figured out the right name to use when I tried listing all of the applications and services on the device.

Getting a list of the applications and services is done through tizen.application.getAppsInfo. This function takes a callback as a parameter. The callback is given a list of the applications installed on the device. For my purposes I was only interested in the id member of the objects that were passed back.

  

tizen.application.getAppsInfo(
    function onListInstalledApps(applications) {
        console.log("List of Applications:");
        applications.forEach(
            function(app) {
                console.log(`  app.id: ${app.id}`);
            });
    });

Once I saw the output of this it was easy to identify the problem I encountered with launching the service.

Output of app listing code

According to the Tizen documentation, when launching a service the ID string used is composed of the package ID and the app ID of the service. The package ID can be found in the config.xml for the web application. In the following you can see the package ID is "IVFd9Or08P".


The app ID can be found in the tizen-manifest.xml for the service project.


The app ID here is "org.sample.service." If you look at the output from the application-listing code you will see that the service shows up as IVFd9Or08P.testservice. It is using the entry from the "exec" field instead of the appid field. I'm not sure why the documentation points to the appid only, but I'm happy to have figured out this problem.

 

Raspberry Pi 4 Announced

Raspberry Pi 4

The fourth generation of the Raspberry Pi has been announced. Previous generations of the Raspberry Pi were each primarily identified by a single set of specifications (not counting the Raspberry Pi Compute Module, since it generally is not used by hobbyists). With the Raspberry Pi 4 this isn't the case: there are three variations available. The new Raspberry Pi 4 comes with a 1.5 GHz ARM Cortex-A72 quad-core processor. With that processor the Raspberry Pi 4 can decode 4K video at 60 FPS or two 4K videos at 30 FPS. The amount of RAM available depends on the version. The smallest amount, 1 gig, is available for 35 USD. The next size, 2 gigs, can be purchased for 45 USD. The largest, 4 gigs, is 55 USD.

At first glance the unit will be recognized as a Raspberry Pi, but a closer look at the ports will show some immediate differences. The Pi has moved from a micro-USB power port to USB-C. The full-sized HDMI port is gone, replaced with two micro-HDMI ports; the unit can drive two displays at once. Two of the four USB ports have been upgraded to USB 3 while the other two are still USB 2. The wireless capabilities are upgraded to Bluetooth 5.0 and dual-band 802.11ac Wi-Fi.

 

The unit is available for purchase from Raspberry Pi’s site now.  A new case for the Pi 4 and a USB-C power supply of appropriate wattage are both available through the site as well.

 

https://www.raspberrypi.org/products/raspberry-pi-4-model-b/

Raspberry Pi 4 on Amazon

 

NVIDIA Jetson Development Environment Setup

In previous posts on the NVIDIA Jetson I've talked about getting the device set up and some additional accessories that you may want to have. The OS image for the NVIDIA Jetson already contains a compiler and other development software; technically someone can start developing with the OS image as it ships. But it is not desirable to develop this way.

There may be some things that you prefer to do on your primary computer, and you'd like to be able to control the Jetson from your primary machine. The OS image for the Jetson already has SSH enabled. If you are using a Windows machine and need an SSH client, I suggest PuTTY for Windows. It's a great SSH client and also works as a telnet or serial console when needed. It's available from https://www.putty.org/.

When PuTTY is opened it is ready to connect to a device over SSH by default. You only need to enter the IP address to start the connection. Once connected, enter your account name and password and you'll have an active terminal available. For copying files over SFTP I use WinSCP (available from https://winscp.net/).

For development on the device I've chosen Visual Studio Code as my IDE. Yes, it runs on ARM too. There are a number of guides available on how to get Visual Studio Code recompiled and installed for an ARM system. The one that I used is available from code.headmelted.com. In a nutshell I followed two steps: I entered a super user session with the command

sudo -s

Then I ran the following (which downloads a script from the headmelted site and runs it).

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 0CC3FD642696BFC8
. <( wget -O - https://code.headmelted.com/installers/apt.sh )

The script only takes a few moments to run. After it’s done you are ready to start development either directly on the board or from another machine.

To make sure that everything works, let's make our first program with the Jetson Nano. This is a "Hello World" program; it is not going to do anything substantial. Let's also use make to compile the program. make will take care of seeing what needs to be built and issuing the necessary commands. Here its use is going to be trivial, but I think starting with simple use of it will give those that are new to it an opportunity to get oriented. Type the following code and save it as helloworld.cu.

#include <cstdio>
#include <iostream>

// Kernel that runs on the GPU.
__global__ void cuda_hello()
{
    printf("Hello World from GPU!\n");
}

using namespace std;

int main()
{
	cout << "Hello world!" << endl;
	// Launch the kernel with one block containing one thread.
	cuda_hello<<<1, 1>>>();
	// Wait for the GPU to finish so its printf output appears before exit.
	cudaDeviceSynchronize();
	return 0;
}

We also need to make a new file named makefile. The following couple of lines say that if there is no file named helloworld (or if the file is out of date based on the time stamp on helloworld.cu), then compile it using the command /usr/local/cuda/bin/nvcc helloworld.cu -o helloworld.

helloworld: helloworld.cu
	/usr/local/cuda/bin/nvcc helloworld.cu -o helloworld

Note that there should be a tab on the second line, not spaces.
Save this in the same folder as helloworld.cu.

Type make and press enter to build the program. If you type it again nothing will happen; that's because make sees that the source file hasn't changed since the executable was built.

Now type ./helloworld and see the program run.
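
If everything is working, the program prints one line from the CPU and then, once cudaDeviceSynchronize() lets the kernel finish, one line from the GPU:

Hello world!
Hello World from GPU!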

Congratulations, you have a working build environment. Now that we can compile code it’s time to move to something less trivial. In an upcoming post I’ll talk about what CUDA is and how you can use it for calculations with high parallelism.

NVIDIA Jetson Nano Shopping List

Jetson Nano Packaging

I made a video, posted to YouTube, about the Jetson Nano and the additional items that I purchased for it. This is a complete list of those items and some extras (such as memory cards of some other sizes).

 

General Items

Memory Cards

 

Items I Found Helpful

Unboxing and Setting Up the NVIDIA Jetson Nano

I pre-ordered the NVIDIA Jetson Nano and had the opportunity to have my first experiences with it this week. For those that are considering the Nano I give you the gift of my hindsight so that you can have a smoother experience when you get started. My experience wasn’t bad by any measure. But there were some accessories that I would have ordered at the same time as the Jetson so that I would have everything that I needed at the start. I’ve also made a YouTube video covering this same information. You can view it here.

How does the Nano Compare to Other Jetson Devices?

The Jetson line of devices from NVIDIA can be compared across several dimensions, but where the Jetson Nano stands out is price. It is priced at about 100 USD, making it affordable to hobbyists. Compare this to the Jetson TX2, which is available for about 500 USD, or the Jetson Xavier, available for about 1,200 USD. Another dimension of interest is the number of CUDA cores that the units have. CUDA cores are hardware units used for parallel execution.

  • TK1 – 192 CUDA cores
  • TX2 – 256 CUDA cores
  • Nano – 128 CUDA cores
  • Xavier – 512 CUDA cores

In addition to the cores the other Jetson kits have support for other interfaces, such as SATA for adding hard drives or a CAN bus for interfacing with automotive systems. For someone getting started with experimentation the Jetson Nano is a good start.

What is In the Box?

Not much. You’ll find the unit, a small paper with the URL of the getting started page, and a cardboard cutout used for supporting the card on the case.

What Else Do I Need?

  • SD Card
  • Power Supply
  • Keyboard
  • Mouse
  • Monitor (HDMI or Display Port)
  • 40mmx40mm Cooling Fan (optional)
  • WebCam (optional)
  • WiFi adapter or Ethernet cable to router

Most of the things on that list you might already have. For an SD card, get one that is at least 8 gigs.

Power Supply

A power supply! It uses a 5 volt power supply like what is used in a phone. Well, kind of. Don't expect just any of your 5V power supplies to work; I found out the hard way that many power supplies don't deliver the amount of current that is needed. Even if the power supply is capable, a USB cable might not allow the needed amount of current to pass. If this happens the device will just cut off. There's no warning, no error message, nothing. It just cuts off. I only came to realize what was going on after I used a USB power meter on the device. I used a power meter for USB-A, but the board already has contacts for a USB-C port. Depending on when you get your board it may have a USB-C port on it (possibly, speculatively).

Web Cam

A Raspberry Pi camera will work, but I used a Microsoft LifeCam. There are a number of off-the-shelf webcams that work. You'll only need a camera if you plan on performing visual processing. If you're going to be processing something non-visual, or if your visual data is coming from a stream (file, network location), then of course this won't be necessary.

WiFi

You have two options for WiFi. One option is a USB WiFi dongle; there are a number of them that are compatible with Linux that will also work here. I am using the Edimax EW-7811UN. After being connected to one of the USB ports it just works. The other solution is to install a WiFi card into the M.2 slot. It might not be apparent at first, but there is an M.2 slot on the carrier board. I chose this solution. Like the USB solution there's not much to be done here; inserting the WiFi adapter into the slot and securing it is most of the work. Note that you'll also need to connect antennas to the wireless card.

Operating System Image

The instructions for writing a new operating system image are almost identical to those for a Raspberry Pi; the difference is the URL from which the OS image is downloaded. Otherwise you download an image, write it to an SD card, and insert it into the Nano. Everything else will be done on first boot. You'll want to have a keyboard connected to the device so that you can respond to prompts. When everything is done you'll have an ARM build of Ubuntu installed.

For writing the OS image I used balenaEtcher. It is available for Windows, OS X, and Linux. The usage is simple: select an OS image, select a target drive/memory device, and then let it start writing to the card. The process takes a few minutes. Once it is done, put the SD card in the Jetson Nano's memory card slot.

Case Options

A case may be one of the last things that you need. But if you have serious interest in the Jetson Nano I suggest sorting out a case at the start. There are no off-the-shelf cases available for purchase for the Nano, but there are a few 3D printable plans. I've come across three and have settled on one.

First Place: Nano Mesh

NanoMesh 3D Printable Case

The case is a bit thick, but it isn't lacking for ventilation, and the case height accommodates a fan. While the design doesn't include any holes for mounting WiFi antennas, drilling them is easy enough.

Second Place: Nano Box

Nanobox Case for nVidia Jetson Nano

The NanoBox envelopes the Jetson, leaving the heat sink almost flush with the case. I'd suggest this one if you don't plan to use a fan on the Jetson. If you ever change your mind and decide that you want a fan it can be added, but it will be on the outside of the case.

Third Place: Nano-Pac

Nano-Pac 3D printable case

There's not much to say about this case. It fully envelopes the Jetson Nano, but I've got questions about its cooling effectiveness.

It’s Assembled and Boots Up. Now What?

Once the Jetson is up and running, the next thing to do is to set up a development environment. There is a lot of overlap between targeting the Jetson series and targeting a PC that has an NVIDIA GPU. What I write on this will be applicable to either except where I state otherwise.

 

Tip: Installing CUDA SDK on Visual Studio 2019

If you try to install the NVIDIA CUDA SDK and plan to use Visual Studio 2019, there's an additional manual step that you'll need to take. The installer for the current version of CUDA (10.1) doesn't specifically target the recently released Visual Studio 2019, but it will mostly work with it. I say "mostly" because after installing it you'll find that the CUDA related project templates are missing and you can't open the sample projects.

Fixing this is as simple as copying a few files. Copy everything from the following folder

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\extras\visual_studio_integration\MSBuildExtensions

Place it into this folder

C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Microsoft\VC\v160\BuildCustomizations

You may have to reply to administrative prompts. Note that the destination path contains "Enterprise"; adjust that segment to match your edition of Visual Studio (e.g., Community or Professional). Once those files are copied you should have access to the project templates and the samples.

Augmented Reality with Samsung XR SDK

Samsung showed the XR SDK at the 2018 Developers Conference. While Microsoft has generally presented their reality technologies as being along a spectrum (ranging from completely enveloping the user to only placing overlays on the real world), it has always been something that involved a head mounted device. Samsung presents AR as something that is either viewed through a head mounted device or something that a person views through a portable hand held window into another world. The language used by various companies varies a bit: Microsoft calls their range of technologies "mixed reality"; Samsung calls theirs SXR, which stands for Samsung Extended Reality.

It was several years ago that Samsung first showed its take on VR with the release of the Note 4 and the developer's edition GearVR. The GearVR is now available as a consumer product, but Samsung took an economical approach to its initial hardware for head mounted augmented reality. Instead of creating custom hardware they took some off the shelf products and mixed them together to make an economical headset.

Experimental AR Headset using off the shelf parts

  • AR Headset: 90° FOV "Drop-In" headset for phones 4.5 inches to 5.5 inches, 180g. 65.99 USD
  • External Camera: ELP VGA USB camera module with 100° FOV lens. 24.69 USD
  • OTG connector: Wavlink USB 3.1 Type C Male to USB 3.0 Type A Female OTG Data Connector Cable Adapter. 5.99 USD (Amazon)
  • Total cost: 96.67 USD

The Samsung XR SDK is almost a superset of the GearVR SDK. I say "almost" because with a proper superset you would find all the same class names that you would expect from the GearVR SDK. In the Samsung XR SDK the classes exist within a new namespace and have been renamed. GearVR programs could be ported over with some changes to the class names being invoked.

In development is an API standard for AR/VR experiences named OpenXR. Once the standard is defined and released Samsung plans for their XR SDK to be an implementation of this standard.

While the GearVR SDK was specifically for Samsung devices and the Samsung headset, the Samsung XR SDK will run on non-Samsung devices for through-the-window AR, and on the Oculus Go and Samsung devices for stereoscopic experiences.

 

Mango Beta 2 Available for Phones Today!

The Beta 2 Mango Windows Phone Tools are available to developers today! Included with the beta is the ability for developers registered with the AppHub to flash their retail devices.

I know there are some non-developers out there that also want to flash their phones, and they may wonder how to get their phones reflashed with the Mango beta. For the time being they cannot. There is an inherent risk in reflashing the phone; you could end up with a bricked phone if something goes bad. If this happens, Microsoft has budgeted to take care of repairing up to one phone per developer. But Microsoft doesn't see this risk as being appropriate for user audiences. [Some] developers, on the other hand, are willing to risk their device's life and limb to have early access to something new. If you brick your device today Microsoft won't be prepared to act on it for another couple of weeks. That's not the best case scenario, but the alternative was to wait another couple of weeks before releasing the Mango tools. If you don't feel safe walking the tightrope without a safety net then don't reflash your device yet.

According to the Windows Phone Developer site if you are a registered developer you will receive an e-mail inviting you to participate in early access to Mango.

Changing the Pitch of a Sound

I got a tweet earlier today from someone asking me how to change the pitch of a wave file. The person asking was aware that SoundEffectInstance has a setting to alter pitch, but it wasn't sufficient for his needs; he needed to be able to save the modified WAV to a file. It's something that is easy to do, so I made a quick example.

Video Example

I used a technique that comes close to linear interpolation. It gets the job done but isn't the best technique because of the opportunity for certain types of distortion to be introduced. Methods with less distortion are available at the cost of potentially more CPU cycles. In the example I made, no matter what the original sample rate was, I play back at a fixed rate and adjust my interpolation accordingly so that no unintentional changes in pitch are introduced.
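
For comparison, true linear interpolation computes an in-between value from the two samples on either side of the fractional read position instead of snapping to a single nearby sample. Here is a minimal sketch of the idea for a mono buffer of 16-bit samples (written in Java purely as an illustration; the example project below is C#, and this method is not part of it):

// Resample a mono 16-bit PCM buffer by 'factor' using linear interpolation.
// factor > 1 raises the pitch, factor < 1 lowers it.
static short[] resampleLinear(short[] source, float factor) {
    int outLength = (int) (source.length / factor);
    short[] out = new short[outLength];
    for (int i = 0; i < outLength; i++) {
        float pos = i * factor;                  // fractional read position
        int k = Math.min((int) pos, source.length - 1);
        int k1 = Math.min(k + 1, source.length - 1);
        float frac = pos - (int) pos;            // how far we are between samples
        // Blend the two neighboring samples in proportion to the position.
        out[i] = (short) (source[k] + frac * (source[k1] - source[k]));
    }
    return out;
}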

To do the work I've created a class named AdjustedSoundEffect. It has a Play() method that takes as its argument the factor by which the pitch should be adjusted, where 1 plays the sound at the original pitch, 2 plays it at twice its pitch, and 0.5 plays it at half its pitch.
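
To make the numbers concrete: with a 44.1 kHz source and the 16 kHz playback rate used in the code below, the base step rate is 44100 / 16000 ≈ 2.76 source samples per output sample. A pitch factor of 0.5 halves that to about 1.38, so playback walks through the source more slowly and the result sounds an octave lower.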

If you are interested the code I used is below.

using System;
using System.IO;
using System.Net;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;
using System.Windows.Ink;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Animation;
using System.Windows.Shapes;
using Microsoft.Xna.Framework.Audio;

namespace J2i.Net.VoiceRecorder.Utility
{
    public class AdjustedSoundEffect
    {
        //Playback always happens at a fixed rate (16KHz here) regardless of the
        //original sample rate. I'm making appropriate adjustments to prevent
        //this from resulting in the pitch being shifted.
        private const int PlaybackSampleRate = 16000;
        private const int BufferSize = PlaybackSampleRate*2;

        private int _channelCount = 1;
        private int _sampleRate;
        private int _bytesPerSample = 16;
        private int _byteCount = 0;
        private float _baseStepRate = 1;
        private float _adjustedStepRate;
        private float _index = 0;
        private int playbackBufferIndex = 0;
        private int _sampleStep = 2;

        private bool _timeToStop = false;

        private byte[][] _playbackBuffers;

        public bool IsPlaying { get; set;  }

        public object SyncRoot = new object();


        private DynamicSoundEffectInstance _dse;

        public static AdjustedSoundEffect FromStream(Stream source)
        {
            var retVal = new AdjustedSoundEffect(source);
            return retVal;
        }

        public AdjustedSoundEffect()
        {
            _playbackBuffers = new byte[3][];
            for (var i = 0; i < _playbackBuffers.Length; ++i)
            {
                _playbackBuffers[i] = new byte[BufferSize];
            }
            _dse = new DynamicSoundEffectInstance(PlaybackSampleRate, AudioChannels.Stereo);
            _dse.BufferNeeded += new EventHandler<EventArgs>(_dse_BufferNeeded);
        }

        void SubmitNextBuffer()
        {
            if(_timeToStop)
            {
                Stop();
                return; // don't queue more audio after stopping
            }
            lock (SyncRoot)
            {
                byte[] nextBuffer = _playbackBuffers[playbackBufferIndex];
                playbackBufferIndex = (playbackBufferIndex + 1)%_playbackBuffers.Length;
                int i_step = 0;
                int i = 0;

                int endOfBufferMargin = 2*_channelCount;
                for (;
                    i < (nextBuffer.Length / 4) && (_index < (_sourceBuffer.Length - endOfBufferMargin));
                    ++i, i_step += 4)
                {

                    int k = _sampleStep*(int) _index;
                    if (k > _sourceBuffer.Length - endOfBufferMargin)
                        k = _sourceBuffer.Length -endOfBufferMargin ;
                    nextBuffer[i_step + 0] = _sourceBuffer[k + 0];
                    nextBuffer[i_step + 1] = _sourceBuffer[k + 1];
                    if (_channelCount == 2)
                    {
                        nextBuffer[i_step + 2] = _sourceBuffer[k + 2];
                        nextBuffer[i_step + 3] = _sourceBuffer[k + 3];
                    }
                    else
                    {
                        nextBuffer[i_step + 2] = _sourceBuffer[k + 0];
                        nextBuffer[i_step + 3] = _sourceBuffer[k + 1];

                    }
                    _index += _adjustedStepRate;
                }

                if ((_index >= _sourceBuffer.Length - endOfBufferMargin))
                    _timeToStop = true;
                for (; i < (nextBuffer.Length/4); ++i, i_step += 4)
                {
                    nextBuffer[i_step + 0] = 0;
                    nextBuffer[i_step + 1] = 0;
                    if (_channelCount == 2)
                    {
                        nextBuffer[i_step + 2] = 0;
                        nextBuffer[i_step + 3] = 0;
                    }
                }
                _dse.SubmitBuffer(nextBuffer);
            }
        }

        void _dse_BufferNeeded(object sender, EventArgs e)
        {
            SubmitNextBuffer();
        }

        private byte[] _sourceBuffer;
        

        public AdjustedSoundEffect(Stream source): this()
        {
            byte[] header = new byte[44];
            source.Read(header, 0, 44);

            // I'm assuming you passed a proper wave file so I won't bother 
            // verifying  that  the  header  is properly formatted and will 
            // accept it on faith :-)

            _channelCount = header[22] + (header[23] << 8);
            _sampleRate = header[24] | (header[25] << 8) | (header[26] << 16) | (header[27] << 24);
            _bytesPerSample = header[34]/8;
            _byteCount = header[40] | (header[41] << 8) | (header[42] << 16) | (header[43] << 24);
            _sampleStep = _bytesPerSample*_channelCount;
            _sourceBuffer = new byte[_byteCount];
            source.Read(_sourceBuffer, 0, _sourceBuffer.Length);


            _baseStepRate = ((float)_sampleRate) / PlaybackSampleRate;
        }

        /// <summary>
        /// Begins playback with the pitch adjusted by the given factor.
        /// </summary>
        /// <param name="pitchFactor">Factor by which pitch will be adjusted. 2 doubles the frequency,
        /// 1 is normal speed, 0.5 halves the frequency</param>
        public void Play(float pitchFactor)
        {
            _timeToStop = false;

            _index = 0;
            lock (SyncRoot)
            {
                _adjustedStepRate = _baseStepRate * pitchFactor;
                _index = 0;
                playbackBufferIndex = 0;
            }
            if(!IsPlaying)
            {
                SubmitNextBuffer();
                SubmitNextBuffer();
                SubmitNextBuffer();
                _dse.Play();
                IsPlaying = true;
            }
        }

        public void Stop()
        {
            if(IsPlaying)
            {
                _dse.Stop();
                IsPlaying = false; // allow Play() to restart playback later
            }
        }
    }
}