Saturday, December 16, 2017

Chromecast Audio Stopped Working

I don't know why the audio on my Chromecast suddenly stopped working, but no amount of resetting, restarting, or replugging made any difference.  The volume wasn't turned all the way down on the Chromecast, the TV, or the app (which was YouTube, but I'm not sure that matters), and nothing was set to mute.

The thing that _did_ make a difference was changing a setting you can only get to via the Google Home app, so here are the details.
  1. Open the "Google Home" app.
  2. Select "devices" from the main menu.
  3. Select "settings" from the popup menu for the "silent" Chromecast device.
  4. Flip on the switch in the "Display" section labeled "Use 50Hz HDMI Mode".
I didn't bother to go find out why, but that got the sound working again.

BTW, the TV to which the Chromecast is connected is a little older but not so old that the default settings on Chromecast should prevent something as basic as _sound_!!!  Thanks a lot Google!!

Anyway, with all the useless forum post chatter out there suggesting "turning it off and on again", "The IT Crowd" style, I thought it might be useful to post an answer that actually did work.


Tuesday, October 17, 2017

ArchLinux + Raspberry Pi Zero W + "Mopidy & MusicBox & SpotifyPlugin" (as a Service)


The info about how to turn a Raspberry Pi Zero W into a tiny, headless, web-browser controlled player for a Spotify premium account is scattered in too many places, and a little painful to find, so I'm bringing it all together here.

High Level Overview

  • Prerequisite - A Raspberry Pi Zero W (wireless) with ArchLinux installed and updated, and with WiFi configured and connected (not covering that stuff here).
  • Prerequisite - A Spotify premium (paid) account.
  • Get audio working over HDMI output
  • Get ALSA installed as the "sink" for MPD
  • Set up yaourt to help with installs from AUR
  • Install mopidy and mopidy-musicbox
  • Install mopidy-spotify
  • Tweak permissions and config so it will all work

Set up Raspberry Pi Zero W with ArchLinux and WiFi

Ok, fine, at least a few links in case that's the only thing keeping you from the rest of this.
  • https://archlinuxarm.org/platforms/armv6/raspberry-pi
  • https://gist.github.com/andreibosco/8246142

Get audio working over HDMI

The Raspberry Pi Zero W doesn't have an audio jack, so the cheapest, easiest way to get audio out is through the HDMI input of a TV or Blu-ray player, which can then feed built-in or external speakers, headphones, etc.

Add these lines to /boot/config.txt and reboot the Pi.
dtparam=audio=on
hdmi_drive=2

Get ALSA installed as the "sink" for MPD

pacman -S alsa-firmware alsa-lib alsa-plugins alsa-utils alsaplayer

Test HDMI Audio

To see whether there is actually a sound device activated, this command...

aplay -l

     ...should show output like...

**** List of PLAYBACK Hardware Devices ****
card 0: ALSA [bcm2835 ALSA], device 0: bcm2835 ALSA [bcm2835 ALSA]
  Subdevices: 7/8
  Subdevice #0: subdevice #0
  Subdevice #1: subdevice #1
  Subdevice #2: subdevice #2
  Subdevice #3: subdevice #3
  Subdevice #4: subdevice #4
  Subdevice #5: subdevice #5
  Subdevice #6: subdevice #6
  Subdevice #7: subdevice #7
card 0: ALSA [bcm2835 ALSA], device 1: bcm2835 ALSA [bcm2835 IEC958/HDMI]
  Subdevices: 1/1
  Subdevice #0: subdevice #0

With the sound-capable HDMI device connected to the Pi, test sound output with this command...

aplay /usr/share/sounds/alsa/Front_Center.wav

Set up yaourt to help with installs from AUR

First, install the development tools and libraries required to compile code and create packages.

      pacman -S base-devel binutils yajl wget

Install package-query from AUR.
Change to a non-root user (in case you're logged in as root, or currently in a "su" session).  makepkg cannot be run as root. (But, you could try it if you want to see the error messages for yourself.)
      su alarm
Change to the home directory of the alarm account.
      cd
 Create a directory to download and unpack build script/resources.
      mkdir install && cd install
Download the package build start script.
      wget https://aur.archlinux.org/cgit/aur.git/snapshot/package-query.tar.gz
Unpack it.
       tar -xzvf package-query.tar.gz
Change to the unpack directory.
       cd package-query
Run makepkg.
       makepkg -Acs
Install the AUR package just retrieved and built.
       su
       pacman -U package-query-1.9-2-armv6h.pkg.tar.xz

Install yaourt from AUR
(Same steps as above for package-query, but for yaourt.)
      su alarm
      cd ~/install
      wget https://aur.archlinux.org/cgit/aur.git/snapshot/yaourt.tar.gz
      tar -xzvf yaourt.tar.gz
      cd yaourt
      makepkg -Acs
      su
      pacman -U yaourt-1.9-1-armv6h.pkg.tar.xz

Install mopidy and mopidy-musicbox

  • Note: With yaourt installed, the remaining AUR installs are a little easier.
  • Note: These installs must be run from a non-root user, but they'll prompt for the root password when they reach the point where the package is built and needs to be installed.
  • Note: The default prompt-answers during these scripts are usually what you want, except for the offer to edit/modify the build.  You probably want to answer 'n' (no) to that one unless you really want to tweak the instructions in the package build, or just want to open it in an editor and take a look at what it does.

      yaourt -S mopidy
      yaourt -S mopidy-musicbox
      yaourt -S mopidy-spotify

  • Note: If any of these go sideways midway (like maybe you accidentally hit something other than the enter key at one of the prompts), just run them again and they'll pick up more-or-less where they left off.

Tweak permissions and config so it will all work


No installer ever seems to get everything quite right, so here's the rest.

Create the "mopidy" user.

(Assuming one of the installers didn't already do this.) 

useradd --create-home --groups audio mopidy

IMPORTANT: Don't miss the extra audio group or, later on, the mopidy user won't have permission to use the sound devices.

Change the owner of the mopidy lib, log, and cache directories

These directories must be owned by mopidy:mopidy (otherwise the mopidy service, which runs as the mopidy user, won't be able to write anything into them).
      chown -R mopidy:mopidy /var/lib/mopidy
      chown -R mopidy:mopidy /var/log/mopidy
      chown -R mopidy:mopidy /var/cache/mopidy

Add a systemd service for mopidy


  • Edit /etc/systemd/system/mopidy.service
[Unit]
Description=Mopidy
After=network.target

[Service]
User=mopidy
ExecStart=/usr/bin/mopidy --config=/etc/mopidy/mopidy.conf
Restart=on-abort

[Install]
WantedBy=multi-user.target
  • systemctl enable mopidy
  • systemctl start mopidy
  • systemctl status mopidy
  • WARNING: Since mopidy runs as the mopidy user, it may generate an empty mopidy.conf file in /home/mopidy/.config/mopidy/ which will interfere with the config settings in /etc/mopidy/mopidy.conf.
    • If things don't seem to be operating as configured, leave the .../.config/mopidy/mopidy.conf file there but delete all of its contents.
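    • For example, one way to empty that file without deleting it (keeping its ownership intact) is the standard coreutils command:
      truncate -s 0 /home/mopidy/.config/mopidy/mopidy.conf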

Bind mopidy's listeners to all network interfaces

The default is to listen only on 127.0.0.1, but that isn't very useful if you want to control Mopidy from another device using the MusicBox web interface.
  • Note: This assumes the Pi will be accessible via private LAN only.
  1. Edit /etc/mopidy/mopidy.conf
  2. Add
    [mpd]
    hostname = 0.0.0.0
    [http]
    hostname = 0.0.0.0

Add config for the spotify plugin.

First, authorize the mopidy-extensions to access your premium Spotify account at:
     https://www.mopidy.com/authenticate/#spotify
Then edit /etc/mopidy/mopidy.conf again and add the following (substituting your own credentials, client id, client secret, etc. for the fake example values shown here).
     [spotify]
     username = myspotifyid
     password = mypw
     client_id = eieio1d7d-fefe-4ccc-8ccc-404040ff10f
     client_secret = HhRR2eeMQ0jRsX77fm62ma5Kw2rABCDeFG-HIJKLMNOP=
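
For reference, after all of the edits above, the relevant parts of /etc/mopidy/mopidy.conf should end up looking something like this (same placeholder values as above):

     [mpd]
     hostname = 0.0.0.0

     [http]
     hostname = 0.0.0.0

     [spotify]
     username = myspotifyid
     password = mypw
     client_id = eieio1d7d-fefe-4ccc-8ccc-404040ff10f
     client_secret = HhRR2eeMQ0jRsX77fm62ma5Kw2rABCDeFG-HIJKLMNOP=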


Final restart and Tryout

  • systemctl restart mopidy
  • systemctl status mopidy  (check for errors)

Point a browser to the Pi's IP address on port 6680 and click the link to switch over to MusicBox (something like this):
    
      http://192.168.0.10:6680


Conclusion


Let me know if anything major is missing.  I tried to get all the necessary steps in here with enough detail to get it done, but without too much extra noise.  I couldn't find any setup guide focused on getting Mopidy set up with ArchLinux on a Raspberry Pi Zero W, but IMO, Mopidy is the perfect single-app use of a Pi Zero W, and ArchLinux is a bit better than Raspbian for a headless setup, so I'm assuming someone else out there feels the same.  Also leave a comment if this helped you make your own music streamer/player out of a Pi Zero W.  HTH.

Tuesday, September 19, 2017

OpenWRT - Making Wired DLNA Server Visible to WiFi Streaming Device

Overview

This article describes how to solve the issue described by these circumstances:
  • You have a DLNA media server connected to the "Wired"/LAN side of a router running OpenWRT.
  • You have a media-player / streaming-client (like a Roku) connected to the WiFi side of the same router.
  • Your player can't find the media server.
  • You've already tried disabling multicast_snooping and that didn't help.
Note: For me, this applies to OpenWRT Chaos Calmer, running on a TP-Link Archer C7, but it probably affects others as well.

What's Really Wrong

That heading is a little misleading.  Part of what's wrong is actually the multicast_snooping thing.  So you still need to do that part, per: https://wiki.openwrt.org/doc/recipes/dumbap#multicast_forwarding
  •  echo "0" > /sys/devices/virtual/net/br-lan/bridge/multicast_snooping
  • Also add that same command to /etc/local.rc so it survives a reboot.
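For example, /etc/rc.local (which OpenWRT runs at the end of boot) would end up looking something like this; the echo has to go before the final "exit 0" line:

      echo "0" > /sys/devices/virtual/net/br-lan/bridge/multicast_snooping

      exit 0
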
According to the docs on the OpenWRT site and several other places, that's all you should need to do, but the OTHER part of what's wrong, that isn't really mentioned anywhere, is that iptables is probably dropping the multicast packets before they can even be "snooped."

Steps to Get the "Other" Part Fixed

  1. Install support for iptables/firewall rules based on packet-type. (from an ssh prompt)
    • opkg install iptables-mod-extra
    • See: https://wiki.openwrt.org/doc/howto/netfilter#opkg_netfilter_packages
  2. Add a custom rule to your firewall configuration.
    • In the LuCI web interface, that's under the Network->Firewall menu, in the "Custom Rules" section; or, from an ssh prompt, edit (e.g. open with vi or vim) /etc/firewall.user
    • The rule:
      • iptables --insert forwarding_rule -m comment --comment "allow multicast from wired to wireless interfaces" -m pkttype --pkt-type multicast -j ACCEPT
  3. Restart the firewall (from an ssh prompt)
    • /etc/init.d/firewall restart
    • Make sure there are no errors like "Couldn't load match `pkttype'"

Summary

After disabling multicast_snooping and adding the firewall rule to allow multicast packets to pass from anywhere to anywhere else, the DLNA server connected via wired Ethernet should show up immediately on streaming devices connected to WiFi.

Update

Even with the firewall open for all multicast packets, this was still flaky and intermittent.  Then it occurred to me that all of my wired devices were hooked into a Netgear GS116Ev2 - 16-Port Gigabit ProSAFE Plus Switch.  That switch ALSO had an IGMP/MultiCast Snooping feature, and it was ALSO enabled by default.  After turning that off AND disabling multicast_snooping in OpenWRT, the DLNA media server pops right up in the Roku player every time.

References

* https://forum.openwrt.org/viewtopic.php?pid=198895#p198895
* https://wiki.openwrt.org/doc/recipes/dumbap#multicast_forwarding
* https://dev.openwrt.org/ticket/13042
* https://www.garyhawkins.me.uk/dlna-upnp-and-multicast-routing/


Compatibility-Based JSON Schema Versioning

Overview

In a corporate environment, the task of centralizing the "enterprise" data model has had its challenges.  Communicating the definition of what a data object looks like has been rather inflexible with some popular technologies like XML Schema, or awkwardly mismatched with the needs of end applications using relational databases.  JSON document encoding has become popular for transporting and storing application data, but it is often prone to problems because the common methods of defining what should be expected in a JSON document (structure, field names, etc.) are somewhat haphazard and weak.  JSON Schema goes a long way toward satisfying the need to explicitly define JSON content, but it is still a challenge to implement a process that provides a useful data-document definition supporting meaningful validation while retaining JSON's agile, freely-changeable roots.  This post describes a possible approach to getting the best of both worlds, by implementing processes around JSON Schemas that achieve flexibility and clearly defined data-documents at the same time.

Definitions

Since there is some confusion about what means what in the world of JSON data, let's get a few terms clear up front.
"Consumer-View" JSON Schema - an artifact, meant to be published for use by consumers of the corresponding data (i.e. application developers), that describes what can be expected in a JSON document that complies with the schema.  Unlike a database or XML schema, there isn't an expectation that this FULLY describes the document, just that the document should match what actually is defined in the schema.
"Producer-View" JSON Schema - a schema artifact, meant to be used internally (i.e. not published for consumers), that exactly defines every detail of a concrete JSON Document.
JSON Document - a "data document" encoded in JSON, that, if it is advertised as compliant with a particular JSON Schema, should at least include data matching the field names and structures defined in that JSON Schema.

The Problem

The "desired" definition of a data document changes over time.  Attribute names change. Data types might be altered.  New stuff is included.  Old stuff disappears.  The organizational structure of the data gets deeper or flatter.  Also, if multiple projects require  different changes to a data document they use in common, at the same time, it becomes VERY difficult to manage release timing and cross-compatibility.  If there must be only one JSON Schema that defines what an actual JSON document looks like, that one JSON Schema will end up having impossible constraints in order to meet everyone's needs.

The Typical, Rigid, "One-Schema" Approach

A common strategy for defining a JSON Document is to lock it together with one and only one JSON Schema.  In other words, this demands that everything defined in the schema must be represented exactly that way in the document.  Nothing more.  Nothing less.  This comes with all sorts of concerns and frustrations about when something can be added, and whether anything can ever be renamed or removed.  If an old application was written against a previous version of the JSON Schema, changing anything besides adding more fields either breaks that old application or requires it to be updated.  This also implies the need to keep all instances of the JSON Document in perfect sync with the JSON Schema that defines the document.

The Proposed Solution

Freely change, or "version," the "consumer-perspective" JSON Schema as often as necessary, and in ways that would not be permitted if it were rigidly mapped one-to-one with a JSON Document, retain all previous versions of the JSON Schema in a published catalog, and include, in each JSON Document, a list of which versions of the JSON Schema it still supports.  Then, separately, if desired, create a "producer-view JSON Schema" to rigidly define an actual JSON document, and "version" that separately.

Detailed Example (Bookstore Theme)


1st Published JSON Schema Version - Everything Starts Out One-to-One

JSON Schema - One field named bookTitle

{
    "title": "Book",
    "type": "object",
    "version": "X",
    "properties": {
        "bookTitle": {
            "type": "string"
        }
    }
}

JSON Document - Complies with one version (X) of JSON Schema

{
    "jsonSchemaVersions": ["X"],
    "bookTitle": "Hitchhiker's Guide to the Galaxy"
}


2nd Published JSON Schema Version - Add Field - Nothing Complicated Yet

JSON Schema - One new field named isbn

{
    "title": "Book",
    "type": "object",
    "version": "ProjectISBN",
    "properties": {
        "bookTitle": {
            "type": "string"
        },
        "isbn": {
            "type": "string"
        }
    }
}

JSON Document - Complies with both published versions of JSON Schema

{
    "jsonSchemaVersions": ["X", "ProjectISBN"],
    "bookTitle": "Hitchhiker's Guide to the Galaxy",
    "isbn": "0345391802"
}

  • Note: The JSON Document still complies with JSON Schema version "X", because all that version requires is the "bookTitle" field... and it's still in the document, still has the same name, etc.

3rd Published JSON Schema Version - Oops, ISBN Wasn't Quite Right

JSON Schema - Replace "isbn" with Separate Fields for ISBN-10 and ISBN-13

{
    "title": "Book",
    "type": "object",
    "version": "ISBN-FIX",
    "properties": {
        "bookTitle": {
            "type": "string"
        },
        "isbn10": {
            "type": "string"
        },
        "isbn13": {
            "type": "string"
        }
    }
}

JSON Document - Duplicates Some Data to Remain Compliant with Both "ProjectISBN" and "ISBN-FIX" JSON Schema Versions (...for now).

{
    "jsonSchemaVersions": ["X", "ProjectISBN", "ISBN-FIX"],
    "bookTitle": "Hitchhiker's Guide to the Galaxy",
    "isbn": "0345391802",
    "isbn10": "0345391802",
    "isbn13": "978-0345391803"
}


  • Note: This document has everything it needs for all JSON Schemas published so far.  However, any application that is using the "isbn" field just got notified that it may not be around forever.
  • Note: This illustrates a little more clearly how the JSON Document can satisfy the requirements of previous JSON Schema versions without the "latest" JSON Schema rigidly defining everything in the document.  This JSON Schema does not define the old "isbn" field, but the document still carries it in order to keep supporting the "ProjectISBN" version of the "Book" schema.


4th and 5th Published JSON Schema Versions - Concurrent Projects

JSON Schema - Add Fields to Support Selling Books

{
    "title": "Book",
    "type": "object",
    "version": "ProjectSellBooks",
    "properties": {
        "bookTitle": {
            "type": "string"
        },
        "isbn10": {
            "type": "string"
        },
        "isbn13": {
            "type": "string"
        },
        "cost": {
            "type": "number"
        },
        "price": {
            "type": "number"
        }
    }
}

Another JSON Schema Published Independently, at the Same Time - Add Fields to Support Inventory Management

{
    "title": "Book",
    "type": "object",
    "version": "ProjectInventory",
    "properties": {
        "bookTitle": {
            "type": "string"
        },
        "isbn10": {
            "type": "string"
        },
        "isbn13": {
            "type": "string"
        },
        "countOnHand": {
            "type": "integer"
        },
        "countOnOrder": {
            "type": "integer"
        }
    }
}

JSON Document - Adds Support for BOTH Projects, Independently - Also Drops ProjectISBN Compliance ("isbn" field is gone now)

{
    "jsonSchemaVersions": ["X", "ISBN-FIX", "ProjectSellBooks", "ProjectInventory"],
    "bookTitle": "Hitchhiker's Guide to the Galaxy",
    "isbn10": "0345391802",
    "isbn13": "978-0345391803",
    "cost": 5.05,
    "price": 7.99,
    "countOnHand": 20,
    "countOnOrder": 10
}


  • Note: This document still has everything it needs for most previously published JSON Schema versions as well as both new ones.  Notice that the two new JSON Schemas do not need to include each other's added fields.  The independent schema changes ONLY affect the document.
  • Note: The jsonSchemaVersions list no longer has "ProjectISBN" because the document no longer supports everything the "ProjectISBN" schema included (i.e. the "isbn" field).  The app developers were warned this was coming!!



Latest Published JSON Schema Version - Single New Project Additions + Cleanup

JSON Schema - Pull Multiple Previous JSON Schemas Together, and Add a few Things

{
    "title": "Book",
    "type": "object",
    "version": "Book2.0",
    "properties": {
        "bookTitle": {
            "type": "string"
        },
        "isbn10": {
            "type": "string"
        },
        "isbn13": {
            "type": "string"
        },
        "cost": {
            "type": "number"
        },
        "price": {
            "type": "number"
        },
        "countOnHand": {
            "type": "integer"
        },
        "countOnOrder": {
            "type": "integer"
        },
        "coverImageLink": {
            "type": "string"
        },
        "synopsis": {
            "type": "string"
        },
        "author": {
            "type": "string"
        }
    }
}

JSON Document - Everything that was Published Before, and then Some...

{
    "jsonSchemaVersions": ["X", "ISBN-FIX", "ProjectSellBooks", "ProjectInventory", "Book2.0"],
    "bookTitle": "Hitchhiker's Guide to the Galaxy",
    "isbn10": "0345391802",
    "isbn13": "978-0345391803",
    "cost": 5.05,
    "price": 7.99,
    "countOnHand": 20,
    "countOnOrder": 10,
    "coverImageLink": "http://mybookstore.example.com/images/covers/img0345391802.jpg",
    "synopsis": "The answer to life, the universe, and everything, is 42.",
    "author": "Douglas Adams"
}

  • Note: This document still identifies all of the previously published JSON Schema versions it supports, and any application that was coded against any one of those listed should still find the fields it knows about, right where they should be.

Producer-View JSON Schema

One of the main things that seems to aggravate the process of modeling data-documents shared by multiple consumers is the lack of separation between the "consumer-view" of the data and the "producer-view" of the data.  Back up in the "Definitions" section, two different JSON Schema artifacts are defined.  The example doesn't say much (or maybe anything at all) about the "Producer-View" JSON Schema.  That's because the example focuses on the primary reason for defining JSON Documents, which is the application-end / consumer-view.

Part of this proposed solution is to stop trying to combine them.  Each of the JSON Document examples above, except for the very first one, didn't match up exactly with the entire set of JSON Schema documents.  In some cases, like the multiple, independent changes made for concurrent projects, the actual JSON document wouldn't have exactly matched any single JSON Schema.  This fact exposes the need for an internal-use-only "super-schema", or "producer-view" JSON Schema, that exactly defines the content of a document satisfying the requirements of all of its supported "consumer-view" JSON Schemas.  While it isn't strictly necessary to create this schema document, having it would help to communicate with the "back-office" developers who need to know what the actual super-set document needs to contain.
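
For illustration only, a producer-view JSON Schema matching the final JSON Document above might look something like the sketch below.  The "required" list and "additionalProperties": false are assumptions about how a producer might choose to lock down the internal definition; they are not part of any of the published consumer-view schemas, and the version label here is made up for this example.

{
    "title": "Book",
    "type": "object",
    "version": "Book2.0-Producer",
    "properties": {
        "jsonSchemaVersions": { "type": "array", "items": { "type": "string" } },
        "bookTitle": { "type": "string" },
        "isbn10": { "type": "string" },
        "isbn13": { "type": "string" },
        "cost": { "type": "number" },
        "price": { "type": "number" },
        "countOnHand": { "type": "integer" },
        "countOnOrder": { "type": "integer" },
        "coverImageLink": { "type": "string" },
        "synopsis": { "type": "string" },
        "author": { "type": "string" }
    },
    "required": ["jsonSchemaVersions", "bookTitle", "isbn10", "isbn13", "cost", "price",
                 "countOnHand", "countOnOrder", "coverImageLink", "synopsis", "author"],
    "additionalProperties": false
}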

Summary

This approach to JSON data modeling resolves a few perplexing challenges.  It sets aside the need to keep a single data-document definition (schema) in lock-step with the document "instance" that satisfies the requirements of that definition.  It also identifies the opportunity to proceed with the concerns of a data-consumer separated from the concerns of a data-producer.  Finally, it alleviates the "backwards compatibility" burden by reducing it to just "compatibility" with published versions of a JSON Schema, never mind whether they were published before, after, or at the same time as any other JSON Schema with which a JSON document may also be compatible.






























Monday, June 19, 2017

Connecting to Cassandra Cluster via SSH Tunnels with the DataStax Java Client/Driver

Introduction

This is probably a little obscure, but if you have only one choice for connecting into a remote environment, like AWS, and that happens to be an SSH connection with tunnels to a "jump box", and you need to connect to a Cassandra cluster using the DataStax driver, I suspect that's why you found this, so read on.

The problem is...

DataStax wrote their Java driver to use Netty instead of the core network connection classes in a typical Java virtual machine.  Netty is written to use Java's NIO API, and NIO does not honor JVM-wide proxy settings like socksProxyHost, so it always attempts to make a direct connection to whatever host/port the Java code specifies.

The other part of the problem is...

Connecting the DataStax client/driver to one node of a Cassandra cluster results in a handshake that retrieves network information for the other nodes in the cluster and then tries to open additional connections to them.  If the primary connection is established via an SSH tunnel, the addresses reported for the rest of the cluster nodes are likely to be routable only within the remote environment, so those connections fail even if you have created additional SSH tunnels, because the driver doesn't know to use them.

The solution (in a nutshell)...

Create tunnels for all of the cluster nodes, and register an instance of the DataStax AddressTranslater when the connection to Cassandra is opened.

The solution (details)...

The JSch library makes it somewhat easy to open an SSH connection with tunnels.

Assume tunnelDefinitions is a collection of simple TunnelDefinition POJOs, each holding the attributes for one local-to-remote host/port mapping (a sketch of such a POJO follows the mapping list below).
A three-node cluster might have mappings of the form bindAddress:localPort:remoteHost:remotePort like:

  • 127.0.0.1:19042:cassandra-cluster-node1:9042
  • 127.0.0.1:29042:cassandra-cluster-node2:9042
  • 127.0.0.1:39042:cassandra-cluster-node3:9042
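
The TunnelDefinition class isn't shown in the original snippets, so here is a minimal sketch of what it might look like; the field names are taken from how they are used in the code below.

public static class TunnelDefinition {
    public final String bindAddress;   // local bind address, typically "127.0.0.1"
    public final int localPort;        // local end of the tunnel, e.g. 19042
    public final String remoteHost;    // Cassandra node hostname inside the remote network
    public final int remotePort;       // Cassandra native transport port, typically 9042

    public TunnelDefinition(String bindAddress, int localPort, String remoteHost, int remotePort) {
        this.bindAddress = bindAddress;
        this.localPort = localPort;
        this.remoteHost = remoteHost;
        this.remotePort = remotePort;
    }
}

With tunnelDefinitions populated, the connect method opens the SSH session and adds one local port forwarding per definition: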
public void connect(String jumpUserName, String sshPrivateKeyFilePath, String jumpHost, int jumpPort) {
    this.jumpHost = jumpHost;
    this.jumpPort = jumpPort;
    jsch = new JSch();
    try {
        LOGGER.info("Using SSH PK identity file: " + sshPrivateKeyFilePath);
        // Point to the PK file for authentication
        jsch.addIdentity(sshPrivateKeyFilePath);
        LOGGER.info("Opening SSH Session to Jumpbox: " + jumpHost + ":" + jumpPort + " with username " + jumpUserName);
        session=jsch.getSession(jumpUserName, jumpHost, jumpPort);
        Properties config = new java.util.Properties();
        config.put("StrictHostKeyChecking", "no");
        session.setConfig(config);
        session.connect();
        for (TunnelDefinition tunnelDefinition : tunnelDefinitions) {
            // Note: Each call to "set" is actually an "add".
            // Note: The bind addresses are typically localhost or 127.0.0.1.
            session.setPortForwardingL(tunnelDefinition.bindAddress, 
                tunnelDefinition.localPort, tunnelDefinition.remoteHost, 
                tunnelDefinition.remotePort);
        }
    } catch (JSchException e) {
        e.printStackTrace();
    }
}

Then, use the same tunnelDefinitions to implement the DataStax AddressTranslater...
AddressTranslater customAddressTranslater = new AddressTranslater() {
    private SshTunnelHelper sshTunnelHelperRef = sshTunnelHelper;
    private Map<String, InetSocketAddress> translationMappings = new HashMap<>();

    @Override
    public InetSocketAddress translate(InetSocketAddress inetSocketAddress) {
        // Lazy Load
        if (translationMappings.isEmpty()) {
            for (SshTunnelHelper.TunnelDefinition tunnelDefinition : sshTunnelHelper.getTunnelDefinitions()) {
                InetSocketAddress local = new InetSocketAddress(tunnelDefinition.bindAddress, tunnelDefinition.localPort);
                InetSocketAddress remote = new InetSocketAddress(tunnelDefinition.remoteHost, tunnelDefinition.remotePort);
                String mappingKey = remote.toString();
                LOGGER.info("Registering Cassandra Driver AddressTranslation mapping with key: '" + mappingKey + "'");
                translationMappings.put(mappingKey, local);
            }
        }
        // Note: The result of InetAddress.toString() has a leading "/"
        String keyToMatch = inetSocketAddress.toString();
        LOGGER.info("Cassandra driver is attempting to establish a connection to: '" + keyToMatch + "'");
        InetSocketAddress matchingAddressTranslation = translationMappings.get(keyToMatch);
        if (matchingAddressTranslation != null) {
            LOGGER.info("Matched address translation from config properties for: " + inetSocketAddress.getAddress().toString());
            return matchingAddressTranslation;
        } else {
            LOGGER.info("Retaining unmatched InetSocketAddress: " + inetSocketAddress.toString());
            return inetSocketAddress;
        }
    }
};

The connection to the Cassandra cluster can then be established with the AddressTranslater...
Note: Even if the Cluster object is built with an AddressTranslater, the initial contact point must be manually translated first:
InetSocketAddress initialContactPoint = new InetSocketAddress("cassandra-cluster-node1", 9042);
InetSocketAddress initialContactPointTranslated = customAddressTranslater.translate(initialContactPoint);
LOGGER.debug("Initial contact point (translated): " + initialContactPointTranslated.toString());
Set<InetSocketAddress> initialContactPoints = new HashSet<>();
initialContactPoints.add(initialContactPointTranslated);
final Cluster cluster = Cluster.builder().withAddressTranslater(customAddressTranslater).addContactPointsWithPorts(initialContactPoints).build();
final Session session = cluster.connect("mykeyspace");
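
One thing the snippets above don't show: when the application is done, the Cassandra resources should be closed before the SSH session (and its tunnels) is disconnected.  Roughly, something like the following; the variable names here are assumptions, since the JSch session and the Cassandra Session both happen to be called session above.

// Close Cassandra resources first, then tear down the SSH tunnels.
cassandraSession.close();   // the Session returned by cluster.connect("mykeyspace")
cluster.close();            // releases the driver's connection pools and Netty resources
sshSession.disconnect();    // the JSch Session; disconnecting also removes the local port forwardings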

Monday, March 6, 2017

Arduino OLED BitMap Animation

Summary

On occasion, I bump up against a little tech challenge that just ticks me off enough that I won't let go until I have defeated it.  While making a special-purpose remote control for a camera aimer, I wanted to use a tiny, inexpensive OLED as a feedback indicator showing which direction the remote device was pointing.  I thought it would be simple enough to display a little bitmap depiction of the camera, rotated to correspond with the direction of the actual camera.  However, it wasn't that simple.

Challenges

  • Creating the initial bitmap was a bit tedious. (...to me anyway.  I suspect my only real solution for that would be more artistic talent.)
  • Converting the bitmap to C++ code required some web searching
    • Found option 1 (online): http://manytools.org/hacker-tools/image-to-byte-array/
    • Found option 2 (Windows): http://en.radzio.dxp.pl/bitmap_converter/
  • Displaying a rotated bitmap wasn't part of the library API for the OLED display
    • and it isn't trivial to just write a rotation function
      • https://forum.arduino.cc/index.php?topic=420182.0
    • and it isn't really quick enough
      • see post #12 of the previous forum thread.
    • and I doubt the result would have looked very good anyway.
  • Each 64x64 bitmap requires about 1/2 KB of the limited 32 KB program memory on an Arduino (ouch).
    • so I realized I'd have to compromise and only include a bitmap for each 10 degree increment, using a total of about 18 KB (36 images @ 0.5 KB each).
      • as it turns out, that's probably good enough, but it's still a trade-off.  I would have preferred a little more granularity.

Abandoned the First Attempt to Create all 36 Bitmaps

After deciding that using individual bitmaps encoded as C++ char arrays was really the most practical option, I started doing the rotation task in Photoshop.  The process promised to be very tedious.  I don't like tedious.  Even after transforming and saving each 10-degree rotation as a separate image, I would still need to upload every image file, one at a time, to the "image-to-byte-array" web site to convert it to C++ code.  The Photoshop processing could have been done with a recorded macro, I guess, but it was taking about 10 minutes to scale, rotate, color-reduce, and clean up extraneous bits.  I really didn't want to spend the next 5 hours doing the rest of the images this way, so I spent a few hours trying to find another way.

ImageMagick to the Rescue

After a short time, I remembered a command-line tool that I have found very handy for tasks like this in the past, ImageMagick.  While I was reading the ImageMagick docs, examples, and forum-posts explaining how to rotate an image, which, frankly, was all I had expected I'd get from the command line tool, I noticed that it was capable of doing a reasonably good job of interpolating the right pixels for a 2-color off-center rotation of the bitmap too (using the Scale Rotate and Translate / SRT function).  I was then really excited to find that ImageMagick could convert an image file to a C/C++ header file.  After a bit more web searching for various examples, I managed to boil the whole process down to 3 ImageMagick commands to produce a header file (C/C++ code) for each rotated image. 

The commands are (using a 10 degree rotation as an example):
  1. magick original_bitmap.png -antialias -interpolate Spline -virtual-pixel transparent -size 64x64 -distort SRT 10 rotated_10_deg_bitmap.png
  2. magick rotated_10_deg_bitmap.png -channel alpha -auto-level -threshold 50% two_color_10_deg_bitmap.png
  3. magick two_color_10_deg_bitmap.png -define h:format=gray -depth 1 -size 64x64 -alpha extract bitmap_10_deg.h
Using a Windows batch/cmd script (which was easier than writing a *nix shell script since I was on a Windows machine anyway), I could quickly produce the full set of header files.  Using the "for /L" command and inserting variable references in a few key places, the script loops through the 10-degree increments and creates a C/C++ char array with hex-encoded data (e.g. 0x0E, 0x00, etc.) representing each image.

All that was required to finish automating the process was to:
  • add a few lines for #ifndef, #define and #endif (to avoid build issues with multiple includes),
  • and use a Windows port of the "sed" command to customize the default variable declaration (static const unsigned char MagickImage[]) with a distinct name and extra keywords (PROGMEM).

Other Possibilities

Before moving on to the actual example script, it's worth noting that image rotation isn't the only way to use ImageMagick to "pre-formulate" bitmaps for an OLED (or other single color displays).  ImageMagick is capable of a multitude of other "distortions" to show movement or perceived effects like 3D flipping.   If rotating an image isn't exactly what you want, you may find your answer by reading through documentation pages like this one: http://www.imagemagick.org/Usage/distorts/

The final Windows command script is as follows:

@echo off
set MAGICK_CMD=c:\win32app\ImageMagick-7.0.5-Q16\magick.exe
set SED_CMD=c:\win32app\unixgnu\sed.exe
set HEADER_OUT_DIR=..\

for /L %%i in (0,10,350) DO (
    %MAGICK_CMD% original_bitmap.png -antialias -interpolate Spline -virtual-pixel transparent -size 64x64 -distort SRT %%i rotated_%%i_deg_bitmap.png
    %MAGICK_CMD% rotated_%%i_deg_bitmap.png -channel alpha -auto-level -threshold 50%% two_color_%%i_deg_bitmap.png
    %MAGICK_CMD% two_color_%%i_deg_bitmap.png -define h:format=gray -depth 1 -size 64x64 -alpha extract bitmap_%%i_deg.h
    echo #ifndef ICON%%i > %HEADER_OUT_DIR%\bitmap_%%i_deg.h
    echo #define ICON%%i >> %HEADER_OUT_DIR%\bitmap_%%i_deg.h
    %SED_CMD% -e "s/char/char PROGMEM/g; s/MagickImage/bitmap_data_%%i/g" bitmap_%%i_deg.h >> %HEADER_OUT_DIR%\bitmap_%%i_deg.h
    echo #endif >> %HEADER_OUT_DIR%\bitmap_%%i_deg.h
)

Notes on Magick command options used:

Some of these explanations may not be exactly right.  This represents the best understanding I had time to obtain, so if any of it is a bit off, please leave a comment with a better explanation.
  • Converting from original PNG (saved "For Web and Devices" from PSD file in photoshop as 2-color PNG8) to rotated PNG
    • -antialias produces an image that has fuzzy edges that are a better approximation of what the rotated image should look like
    • -interpolate Spline gives the best results for translating the lines and spots in the original image
    • -virtual-pixel transparent fills in the alpha-channel transparency for pixels that are set on an edge (instead of the pixel's color)
    • -size 64x64 saves dimensional info in the output image so the next step doesn't whine about %h and %w being missing
    • -distort SRT is the number of degrees to "scale rotate translate" which basically accomplishes an in-place rotation without clipping
  • Converting from rotated PNG to the BW (black and white) PNG
    • -channel alpha tells ImageMagick to use the alpha channel instead of one of the color channels to pick the output pixels
      This is necessary because the rotated image is essentially a gray-scale image with a transparent background
    • -threshold 50% yields a good final pixel on/off choice, based on the transparency/alpha values.
  • Converting from the BW PNG to the C/C++ header file
    • -define h:format=gray tells ImageMagick to output just the image bit data bytes, without GIF or PNG header info included
    • -depth 1 constrains the output to 1 bit per pixel as required for the OLED (each pixel is either on or off)
    • -size 64x64 (may not be required  TODO: experiment)
    • -alpha extract tells ImageMagick to use only the alpha channel info in the PNG instead of every color channel.
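
The post is about generating the headers rather than the display code, but to round things out, here's a minimal sketch of how the generated headers might be used.  It assumes an Adafruit SSD1306 128x64 I2C module and the Adafruit_GFX / Adafruit_SSD1306 libraries (a common choice for these little OLEDs; the original project's display library isn't named here), and it assumes the bit order produced by ImageMagick matches what drawBitmap() expects, which is worth verifying on real hardware.

#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>
#include "bitmap_0_deg.h"      // defines bitmap_data_0[] in PROGMEM
#include "bitmap_10_deg.h"     // defines bitmap_data_10[] in PROGMEM
// ...include the remaining headers up through bitmap_350_deg.h

Adafruit_SSD1306 display(128, 64, &Wire, -1);

// Pointer table for the 36 PROGMEM bitmaps, indexed by (degrees / 10).
// Entries not listed here stay null until the remaining headers are included.
const unsigned char* const BITMAPS[36] PROGMEM = {
  bitmap_data_0, bitmap_data_10 /* , ...the rest, ending with bitmap_data_350 */
};

void showHeading(int degrees) {
  int index = (degrees % 360) / 10;  // map 0-359 degrees onto one of the 36 stored 10-degree steps
  const unsigned char* bmp = (const unsigned char*) pgm_read_ptr(&BITMAPS[index]);
  display.clearDisplay();
  display.drawBitmap(32, 0, bmp, 64, 64, SSD1306_WHITE);  // 64x64 icon, centered horizontally
  display.display();
}

void setup() {
  display.begin(SSD1306_SWITCHCAPVCC, 0x3C);  // typical I2C address for these modules
  showHeading(0);
}

void loop() {
  // e.g. call showHeading(newDirection) whenever the remote reports a new direction
}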

Conclusion

What would have been a lot of tedious work creating derivative images with Photoshop (or a similar image editor) and various other online/GUI-based tools was accomplished with a bit of scripting and a spectacularly useful (and free) command-line tool.  Hope this comes in handy for something you're working on.  Please leave a comment and let me know if you found it useful.