myCoreDump

Introduction

I hope you enjoy this core dump! The thoughts are so interrelated and connected that it is difficult to optimize the presentation, so you may need to apply your own defragmentation to get it. In addition, the order is not intended to indicate priority; it is all freaking important!

University API (UAPI)

Any application we acquire or develop must have an API, preferably a RESTful one. If the function of the application is core to university business, it should be exposed through the UAPI. If it is not a core function of most, if not all, educational institutions, we should expose its API through our API management tools, but it shouldn’t be part of the UAPI.
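To make this concrete, a consumer of the UAPI might fetch a student’s enrollments through one well-known resource instead of talking to the underlying student-information system. Here is a minimal sketch in Python; the host, path, and token are invented for illustration and are not the actual UAPI contract:

import requests  # third-party HTTP client

# Hypothetical UAPI endpoint and token -- illustrative only.
UAPI_BASE = "https://api.example.edu/uapi"
TOKEN = "REPLACE_WITH_OAUTH_TOKEN"

def get_enrollments(student_id):
    """Fetch a student's enrollments through the UAPI facade,
    never directly from the underlying system."""
    resp = requests.get(
        f"{UAPI_BASE}/students/{student_id}/enrollments",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    resp.raise_for_status()
    return resp.json()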

Personal API (PAPI)

When we build a system that will store personal or individual information, we should consider how we might leave the information in the possession of the individual and access it for our use through their personal API. Since no one yet has a personal API, for the time being we must provide that as well. This will require you to stretch your imagination and creativity, but that’s good for you.

Domain-Driven Design

Everyone should read at least the first two chapters of the book Implementing Domain-Driven Design by Vaughn Vernon! The super short summary – bring domain experts and developers together to create a ubiquitous language that is embedded in the code itself. In addition, define or determine bounded contexts wherein this language is valid. Without this you won’t understand how we’re going to build solutions and you won’t have a clue what is in and what is not in a microservice. Read it!

Microservices

Microservices are an architectural style that will be used at BYU to create larger systems. Systems built from microservices are loosely coupled (I would even go so far as to say highly decoupled); each implements a single business capability, has a well-defined interface, and communicates using only that interface. The size of a microservice is governed by the size of the associated bounded context, so go and read the DDD book! At BYU an important part of a microservice’s interface is its ability to raise events. Go figure out why.

Event-Driven Architecture (EDA)

Systems that poll are inefficient! Build systems that raise events so other systems don’t have to waste time and resources. You can keep asking me if you have to do this, but you can be assured that when I change my mind I’ll let you know. If you didn’t find the humor in the last sentence then go read the links again.
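To make the contrast concrete, here is a minimal sketch, with invented endpoint names, of the two styles: the polling consumer burns requests asking “anything new?”, while the event-driven producer notifies its subscribers only when something actually happens.

import time
import requests

# --- The wasteful way: poll every few seconds ---
def poll_for_changes(url):
    while True:
        resp = requests.get(url)   # most responses say "nothing new"
        if resp.json().get("changed"):
            handle(resp.json())
        time.sleep(5)              # time and resources wasted either way

# --- The event-driven way: push an event when state changes ---
SUBSCRIBERS = ["https://listener.example.edu/hooks/grades"]  # hypothetical

def raise_event(event_type, payload):
    """Notify every subscriber once, exactly when something happens."""
    for hook in SUBSCRIBERS:
        requests.post(hook, json={"type": event_type, "data": payload})

def handle(event):
    print("processing", event)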

Application Acquisition

When we purchase applications we should give preference, strong preference, to those that run in the cloud. In fact, before we choose an application that is not available as a service, choose someone in your group you don’t love and care about to come get my approval.

When we build services or applications they will run at Amazon and use the most abstract service offerings that make sense. In other words, we should not instantiate EC2 servers and S3 storage and then build queues, notification services, etc., but instead should use services such as SQS, SNS, Lambda Functions, etc.
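For instance, rather than standing up our own message broker on EC2, a service can publish a notification to SNS in a few lines of boto3 and let Amazon worry about the infrastructure. A sketch, with a made-up topic ARN and event:

import boto3  # AWS SDK for Python

sns = boto3.client("sns")

# Hypothetical topic -- created once, managed by AWS, nothing for us to patch.
TOPIC_ARN = "arn:aws:sns:us-west-2:123456789012:student-events"

def publish_event(subject, body):
    """Publish a notification; AWS handles delivery, fan-out, and retries."""
    sns.publish(TopicArn=TOPIC_ARN, Subject=subject, Message=body)

publish_event("class-added", "student 42 added CS 101")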

DevOps

DevOps is a culture and practice that we hope will result in the rapid development, testing, and deployment of software. We are measuring the number of deployments / week, failures / week, and time to recovery. We are promoting small changes, thorough automated testing, and deployment to production often. Your team (the DDD team) is in charge and responsible for the functionality, performance, and reliability of “your” product. 

If those in the hardware world think you’re off the hook, think again. Software is eating the world, software is eating your world. The days of interacting with network switches, routers, firewalls, etc. are over. Learn to program, learn to configure hardware devices using programs, learn to use DevOps to configure, test, and deploy hardware platforms as rapidly as “other” developers – that’s right, you just became developers!

Where to Compute

In the past we built data centers and populated them with servers, storage systems, and network components. As CPU performance increased, computers became more able to run multiple applications, but instability due to unintentional application interaction made this approach intolerable.

We found ourselves with many underutilized servers, each running a single application to maintain reliability. Along came server virtualization, enabling us to instantiate multiple virtual servers on each physical server. Over the past several years the number of physical servers has diminished considerably.

Well, it is time for another paradigm shift. We are now embarking on a journey that will result in our compute and storage being somewhere else. We will take advantage of Amazon to deliver what our applications and services need to run. Acquired applications will also run in the “cloud”. In either case they will not be housed here. Resources previously used to purchase and maintain servers and storage will be redirected to this new endeavor.

Networks

Unlike servers and storage, I believe we will have a wired and wireless network on campus for the foreseeable future. However, the way we deploy, configure, and maintain these networks will change drastically. Remember, software is eating the world, and networking is not an exception to the rule. Network components will be physically installed in some generic way and then configured remotely via software.

In a DevOps fashion, when a problem occurs you figure out what went wrong in the configuration script, you repair the script, you test the script, and you redeploy. Remember, we’ll be watching how often you deploy, how many failures occur, and how long it takes to recover.
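To illustrate the workflow, and not any real vendor’s API, a configuration script might push a declared state to a switch’s management interface and then read it back to verify. When something breaks you fix this script and redeploy it rather than logging into the device by hand.

import requests

# Hypothetical management endpoint -- real switches expose vendor-specific APIs.
SWITCH_API = "https://switch-42.example.edu/api/config"

DESIRED_CONFIG = {
    "vlan": 120,
    "ports": {"1-24": {"mode": "access", "vlan": 120}},
}

def deploy():
    """Push the declared configuration, then read it back to verify."""
    requests.put(SWITCH_API, json=DESIRED_CONFIG).raise_for_status()
    actual = requests.get(SWITCH_API).json()
    assert actual == DESIRED_CONFIG, "drift detected: fix the script and redeploy"

if __name__ == "__main__":
    deploy()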

The days of hugging these devices are over. If you want one to hug, you can have one of the old ones and keep it in your office – disconnected from the network of course.

Domain of One’s Own (DoOO)

As we embark on this new path it is a great time for you to consider contributing to the content of the Internet. Let your light so shine by getting a domain of your own and sharing your goodness and skills with others. Get one at domains.byu.edu. There you can blog your greatest thoughts and post content that you syndicate to Facebook, Twitter, or other services. Go learn, learning is fun!

We are offering this service to all students because we believe they should understand more about how the Internet works. We believe they have much to offer the world and they need to know they can share it with little help from service providers. What they build is transportable to other hosting services and is theirs! In the future a DoOO will enable an individual to have a portfolio and expose this and much more through their personal API (PAPI).

Final Thoughts – For Now!

We have a great team! Let’s pursue all of this FUN with the greatest enthusiasm and Heaven will shine down on us. Let us share our best thinking with others: share code on GitHub, answer questions on Stack Overflow, blog about your experiences, publish papers, present at conferences, participate on panels. In short, learn, teach one another, and teach the world!

myDoorbell: A Learning Adventure

Introduction

After being a university chief information officer (CIO) for more than a decade, I decided to refresh the technical skills I acquired through formal education and practice as an electrical engineer. I learn best by doing, so I picked a project I was interested in pursuing, with the end goal being the learning and not the finished product. I intend to share several posts that illustrate the things I learned, and I hope they are of value to the reader.

My Project

I am interested in the Internet of Things (IoT) movement and wanted to make strides toward making it practical, simple, and secure. I believe connected devices should be simple and consume little power. This likely requires devices that wake periodically, connect to some sort of network, and then go back to a low-power state. After some experimentation it was clear, at the time, that WiFi was a real power hog and wasn’t a likely candidate. This realization led me to believe that another router, hub, or coordinator device would be necessary. I recalled the effort required to convince homeowners to acquire WiFi routers and looked for an approach that would make this palatable.

I decided the answer was to create a product that homeowners would want to purchase because it excited them, and that, by the way, contained a network router / coordinator. Once acquired on its own merits, the product would facilitate the inexpensive and simple acquisition of other devices that connect to it. Products worth considering would be interesting to households and would connect to household power:

  • Lamps
  • Televisions or other audio / video (AV) equipment
  • Thermostats
  • Doorbells

Lamps seem simple and boring. However, after implementing my first choice, I know I should have chosen a lamp because it would have been boring, simple, and done! I decided embedding anything in televisions or other AV equipment would require skills and resources I didn’t have. Nest took the thermostat direction, and while I disagree with the approach of putting so much technology in a tightly coupled system, I didn’t want everyone to judge my work against a commercially available product. I chose to implement a doorbell because doorbells are ubiquitous, simple, and meet my requirements:

  • They are in nearly every U.S. household.
  • Power is available where the indoor ringer is found.
  • They do one thing and no one cares if they do anything else.
  • They are in a good physical location for a network router.
  • They are out of the way and aren’t moved, unplugged, or inadvertently reconfigured.

I chose to create a doorbell that would function as a replacement doorbell, would act as an IoT network router, and connect this network to the Internet by also connecting to an existing WiFi network. A quick trip to Home Depot revealed that an inexpensive doorbell cost about $13. Even with no experience in product development, I knew I wasn’t going to be able to build a doorbell that also acted as an IoT to WiFi gateway for $13. To be compelling enough to get households to acquire my doorbell it would have to be feature rich:

  • This doorbell would play ringtones uploaded by the user to celebrate seasons, holidays, birthdays, etc.
  • Each time someone rings this doorbell the time and date should be logged.
  • The owner can configure the bell to text them when someone rings.
  • The bell should be easy to configure not to make noise when babies are sleeping, when pets shouldn’t be disturbed, or when the owner just doesn’t want to know you’re there.
  • When the bell is rung it should be configurable to access other Web resources such as APIs, webhooks, etc.
  • The system should be controlled and configured using a mobile app.
  • The doorbell must be a simple replacement of the original doorbell ringer.

While these features increase the likelihood of the doorbell being compelling enough to overcome the necessary price point, they certainly eliminate any chance of it being simple.
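As a sketch of how several of these features might hang together, the ring handler becomes a small event pipeline: log the ring, honor quiet hours, text the owner, and fire any configured webhooks. Every endpoint, number, and helper below is a made-up placeholder, not the actual myDoorbell implementation.

import datetime
import requests

# Hypothetical placeholders -- configuration would come from the mobile app.
WEBHOOKS = ["https://example.com/hooks/doorbell"]
TEXT_GATEWAY = "https://sms.example.com/send"
OWNER_PHONE = "+15555550100"
QUIET_HOURS = (22, 6)   # silent between 10pm and 6am

def on_ring():
    now = datetime.datetime.now()
    log_ring(now)                                   # log the time and date
    if not in_quiet_hours(now.hour):
        play_ringtone()                             # user-uploaded ringtone
    requests.post(TEXT_GATEWAY,                     # text the owner
                  json={"to": OWNER_PHONE, "msg": f"Doorbell rang at {now:%H:%M}"})
    for hook in WEBHOOKS:                           # hit other web resources
        requests.post(hook, json={"event": "ring", "at": now.isoformat()})

def in_quiet_hours(hour):
    start, end = QUIET_HOURS
    return hour >= start or hour < end

def log_ring(when):
    with open("rings.log", "a") as f:
        f.write(when.isoformat() + "\n")

def play_ringtone():
    pass  # platform-specific audio playback goes here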

Summary

In this post I declared my intent to refresh my technical skills through the development of an IoT product, an amazing doorbell, myDoorbell. In the next few posts I will describe how a typical doorbell works, illustrate the general system layout for this new doorbell, describe how to create it so it fits into existing doorbell systems, and discuss many details of the techniques and technologies that make this possible. It will be a fun journey with many twists and turns, but that’s how learning happens!

Domains, Personal APIs, and Portfolios

Introduction

In addition to the traditional educational experience students at Brigham Young University receive, we want them to acquire skills, techniques, and tools that facilitate their current and future learning. We believe students should learn how to control and own their digital identity, content, and personal data. With this goal in mind we have initiated a pilot program using a concept known as Domain of One’s Own. We hope to accomplish several goals using this concept and associated training:

  1. Teach students, faculty, and staff why they should care about owning, controlling, and appropriately sharing their online identity, the content that defines them, and their personal information.
  2. Help individuals understand how to choose a domain name that accurately and professionally represents them to others.
  3. Encourage members of our community to not simply consume, but contribute to the body of knowledge through the use of blogs and social media.
  4. Support individuals in publishing a Personal API (e.g., api.example.com) that allows the owner to authorize others to interact with their personal information and revoke access privileges as desired.
  5. Support students and faculty in creating a portfolio (e.g., api.example.com/portfolio) as part of their Personal API that is owned and maintained by the individual, and yet enables the owner to authorize others to consume, contribute to, and evaluate the collection.

Domain of One’s Own

Many members of our community share their pictures, memories, thoughts, insights, and writings on social media sites that are controlled by others. The privacy policies of these sites change over time, access privileges may change, copyright ownership is a concern, and the look and feel desired by the content owner may change without their knowledge, input, or control. Contributors have no control over the amount or type of advertising placed around or even over their content. In many cases they may not be able to easily move their content to other providers, remove content they no longer wish to share, or even pass ownership on to others as desired. We want members of the BYU community to understand that there is a better way.

Consequently, we have chosen to use and teach a concept known as Domain of One’s Own. We first heard about Domain of One’s Own from Jim Groom when he was at the University of Mary Washington. After a visit we were hooked on the idea of freeing our community and using the tool to rethink content ownership, Personal APIs, portfolios, and Learning Management Systems.

Our implementation of a Domain of One’s Own consists of a simple hosted server configured using cPanel and pointed to by the end-user’s chosen domain. We are using the service and tools provided by Reclaim Hosting, which provides the tools, hosting, and the process for acquiring domains. With the default initial configuration, domain owners have a blog driven by the Known blogging tool. While this is a great introduction that allows domain owners to contribute immediately, the system is open and can grow as the domain owner’s sophistication increases. The system allows users to set up subdomains, email servers, and database servers, and to install and run many LAMP-stack-based applications. The tools and services have been chosen carefully to allow users to move their domain and associated content to other providers easily. Tools were chosen to be immediately useful, provide future flexibility, and help users learn introductory system administration skills that are critical to understanding the world they are in and will inherit.

Domains

We believe every individual should own and control their domain. Choosing an appropriate domain is important. In many cases the domain will be used in a professional capacity for years, perhaps for life. We are creating instructional material, including short video segments, which will give advice on how to choose well. We intend to create these materials in a way that minimizes branding and IP protection so others can easily use them for similar purposes at their institutions.

Personal API and Portfolios

Imagine a world where other sites on the web don’t hold your personal data, but instead request access to the data they need through your Personal API. Perhaps you grant them access to only the portions they actually need and restrict them from others. They use the resources they’ve been authorized to access, perform the business functions you desire, return results, and their access is revoked.

For example, imagine you work for weLovePrivacy.com and it’s payday. The payroll system springs to life and determines how much you should be paid this month. However, it needs to know how much should be withheld for taxes, what pretax contributions to make and where they should go, where you want your money deposited, etc. In a traditional system all of this information is held centrally. This centrally held information compels the institution to create systems that enable you to manipulate it, and makes the company liable for any loss of this data. On the other hand, you are depending on the institution to safeguard your personal information and not use it for nefarious purposes, a dangerous assumption.

However, there is a better way. Imagine the payroll system interacts with your Personal API to obtain your social security number, the number of exemptions you are declaring, the name of your 401k vendor, your 401k account number, your checking account provider and account number, etc. The institutional system does the computation and disbursements, and your Personal API revokes access to these resources until the next time they are needed. While the institution could store the collected information, it may not be in their best interest to do so, and the information could even be released to them with the understanding that it is to be used solely for the purpose disclosed to the user.
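A sketch of the payroll side of that exchange, with every endpoint and field name invented for illustration: the payroll system presents a scoped token, reads only the resources it was granted, and relinquishes access when it finishes.

import requests

PAPI = "https://api.example.com"  # hypothetical Personal API base URL

def run_payroll(token):
    """Read only the granted resources, compute, then give up access."""
    headers = {"Authorization": f"Bearer {token}"}  # scoped, time-limited grant
    tax = requests.get(f"{PAPI}/tax-withholding", headers=headers).json()
    bank = requests.get(f"{PAPI}/direct-deposit", headers=headers).json()
    disburse(tax, bank)  # institution-side computation and disbursement
    requests.delete(f"{PAPI}/grants/payroll", headers=headers)  # relinquish access

def disburse(tax, bank):
    print("depositing to", bank["account"], "with", tax["exemptions"], "exemptions")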

While it may be a while before ERP administrators are comfortable getting employee data from a personal API, there are plenty of other scenarios where a personal API is useful. Portfolios are one example. An instructor at an institution requests authorization to place assignments into your personal portfolio, the request is granted, and the assignments are deposited. You perform learning activities that generate solutions to the assignments and deposit these in your portfolio. You have authorized the instructor to see them and place their critiques back into your portfolio. Since this is your portfolio, it moves with you from one part of your life to another, from one institution to another, etc. It is yours to use and share as you choose.

Summary

It is time for learners to take control of their content, artifacts of education, and personal information. Our desire and intent is to teach these principles to our community and give them the necessary tools. We hope to do so in a way that others can easily use and benefit from.

Freedom via Abstraction

At Brigham Young University (BYU) we have been developing a University API to expose the functionality of a reasonably generic educational institution, while consuming a very specific set of underlying technologies. Our generic institution has instructors, students, courses, classes, and locations. These resources and available HTTP methods are being combined to expose acceptable business processes such as registration, adding and dropping classes, etc. We will continue to add resources and appropriate business processes as necessary to meet our institutional needs.

Our intent is to develop future applications by consuming the University API, and we will encourage others to do the same. We will no longer consume the user interfaces or APIs of underlying systems. This layer of abstraction will enable us to replace the underlying technologies with new technologies that provide similar functionality. Regardless of the tools or technologies used, those consuming the University API will be unaware of the underlying change. This will give the IT organization the freedom to make changes to reduce cost, modularize monolithic applications, move to microservices, etc. without impacting application developers or end users. This will bring them freedom via abstraction.

I’m writing about this today because in my mind this is an important general architectural pattern that should be followed more often. David Wheeler, a British computer scientist, is credited with saying, “All problems in computer science can be solved by another level of indirection, except of course for the problem of too many indirections.” While most often quoted by programmers in discussions about pointers and similar constructs, I think abstraction layers, like the one discussed above, are perfect examples of additional layers of indirection that help us solve problems.

While APIs make this work easier, the approach is more generally applicable. For example, imagine you have an ERP system that is aging, and the thought of living through another ERP transition scares you to death, or at least adds one more reason to consider early retirement. Imagine you add a user interface layer between the existing ERP and its users. This could require consuming an API provided by the ERP vendor (wouldn’t that be awesome?), screen scraping, or other less exciting means. When this is complete, the new ERP system can be installed and connected to the user interface developed above. The two systems can be brought to a consistent state, and the connected user interface can be used to keep them that way. Transaction responses can be compared until you’re confident in the new system. At this point the old ERP system can be retired. You have transitioned to a new ERP system and the users are unaware; that’s success!
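A sketch of that comparison step, assuming both systems sit behind the new user interface layer (endpoints hypothetical): each transaction is routed to both ERPs, the old system stays authoritative, and any divergence is flagged until confidence is earned.

import requests

OLD_ERP = "https://erp-old.example.edu/api"  # hypothetical
NEW_ERP = "https://erp-new.example.edu/api"  # hypothetical

def submit_transaction(path, payload):
    """Send the same transaction to both systems and compare responses."""
    old = requests.post(f"{OLD_ERP}/{path}", json=payload).json()
    new = requests.post(f"{NEW_ERP}/{path}", json=payload).json()
    if old != new:
        log_divergence(path, payload, old, new)
    return old  # users see only the authoritative answer, for now

def log_divergence(path, payload, old, new):
    print(f"divergence on {path}: old={old} new={new}")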

There are two main points I think are worth noting. First, an additional layer of abstraction can free an IT organization to make changes without impacting end-users. Second, end-users shouldn’t use the provided user interfaces of institutionally important applications, but rather be provided with screens and applications we develop on top of APIs we control. Installation of a new application is not complete until an API we control is designed and used to create an abstracted user interface that exposes the desired functionality. When applications are installed using this model, they are more easily replaced. Freedom via abstraction!

Techniques for Teaching Others

Over the past several years I have been a member of several executive leadership teams. While in these environments I have witnessed leaders using several interesting techniques to teach others. I share two of these techniques because I believe others can use them successfully.

The first technique was used by an executive to teach his superiors without “teaching” them. Here’s what happened: The executive called me a few days before a meeting that I was unaware of, and asked me ten to fifteen technical and business questions that were within my area of expertise. I answered the questions, the phone call ended, and I felt good about the interaction. A couple of days later I was invited to a meeting with this individual and three others senior to him. After we dispensed with pleasantries, the first executive steered the conversation in the direction of our previous phone call and began asking me the same questions he had asked a few days earlier. I was a bit puzzled, but simply regurgitated what I had said previously. Since the three other executives considered me expert in this area, they nodded, asked a few questions, drew conclusions, and thanked me for helping them better understand. When the meeting ended, the first executive walked me to the elevator and simply stated, “you just witnessed how you teach your superiors without ‘teaching’ them”. I saw that he had gathered the information he needed prior to the meeting, invited the ‘resident expert’ to talk—already knowing what would be said, and used that expert to drive home principles he felt were needed, but which may not have been accepted if they came directly from him. I remember feeling that not only were his superiors taught, but I was masterfully mentored.

The second technique was used by an executive who was senior to the group being taught, but was not their line leader. In this case I was brought in to make a presentation on a topic I was convinced no one in the room was interested in. At the conclusion of my presentation I asked if there were any questions. The executive asked, “Will you tell us the difference between a need and a want?” Well, my assessment of the interest in my presentation was proved correct, but now my mind raced to come up with something useful to say. I spent five to ten minutes describing our organization’s strategic planning process and how it resulted in us filtering the organizational wants down to real needs. The executive responded by letting me know that I had done a bad job of answering his question and asked me to try again. I was invited to try two more times, after which I was told to take a seat. I sat, feeling rejected and believing that I had completely failed the task given me. At that moment my boss put his arm around me and whispered in my ear, “great job”. I was tempted to respond, “What meeting were you in?”, but resisted. The meeting wrapped up and the executive and a peer of his thanked me for the great answers. It turns out that my explanation of the strategic planning process was exactly what the executive wanted. I was used to teach these important principles (in three different ways) to other executives without their line leaders being present or needing to be involved. The others could now adopt the techniques discussed as their own, impress their line leaders, and accomplish the organization’s goals. I was used, but I learned a lot and felt great about it!

These two techniques were effective in these specific cases, but they are more generally applicable. I hesitate to share them because doing so might make them less effective in my own career, but I think giving others the chance to use them is worth that risk.

The August and Kevo Door Locks

Introduction

I have been waiting for the right technology to enable me to rid myself of my traditional door locks. While I’m not completely satisfied with my current options, I found two locks I was willing to try: the August and the Kevo. In this post I’ll discuss both.

Packaging

Let’s face it: Apple has set an amazingly high bar when it comes to product packaging. I love acquiring new Apple products because it is fun to open them and experience the unveiling. The August wins this battle hands down. While packaging is probably not a big deal in the end, it sets the initial impression upon which all the rest of the experience is built.

Installation

The Kevo is a Kwikset product and installs just like any other Kwikset deadbolt. The August, on the other hand, is very simple and quick to install.

When you install the Kevo lock, you install a new outside key mechanism, a new deadbolt, and a new interior locking mechanism. The box contains all of the components, hardware, and even adapter pieces to fit the system to various sorts of doors, doorknob holes, etc. It is very complete and only requires that you provide a screwdriver and effort. If you are putting these locks on several doors of your home, the kit comes with the tools and instructions to rekey the exterior key system so a single physical key will open all of the doors.

The August system only replaces the interior portion of your existing deadbolt; this has pros and cons. This characteristic makes the installation a snap. You simply remove the interior portion of your existing lock, attach the August mounting bracket in its place, and slip the August onto the bracket. In addition, if you rent or lease a property and the contract stipulates you can’t change the locks, this enables you to have a smart lock and still be in compliance. The downside of this simple interior replacement is that it can cost you. In my case, not only was I interested in obtaining smart locks, I was eager to replace my failing existing locks. That meant I had to purchase the August plus a new deadbolt system, and then throw the interior portion of the brand-new mechanism away. With the August costing $250, roughly $60 more than the Kevo, this additional expense is significant.

Keys

Physical Keys

Both systems enable you to gain access to the protected premises using traditional physical keys. As mentioned earlier, the Kevo comes with new physical keys while the August uses the same exterior key that came with your original lock.

Virtual Keys

The biggest difference between the two locks, in terms of virtual keys, is that those for the Kevo cost $1.99 as an in-app purchase for any beyond the first two, while those for the August are free. This is mitigated a bit by the fact that when you purchase a Kevo key and allocate it to a guest, family member, etc., you can reclaim it and reissue it without paying an additional fee. This is much like having additional keys made for your home and temporarily giving them to others.

Virtual keys for the Kevo system are distributed to others via email, while those for the August are distributed via text message. Shared keys can be adjusted to give guest access for a period of time, for set times and days of each week, or for anytime access. Keys can also be distributed that permit admin access to see entry logs, distribute keys to others, etc. Both systems are very similar in this respect.

Once a key is distributed in the Kevo system it will be active until you delete the individual’s access rights. In the August system the key can be revoked or temporarily disabled. This feature might be convenient when you’re having your carpets cleaned or wood floors refinished and you don’t want others in your home, but you also don’t want to send the message, “we don’t love you anymore”.

Access

To gain access to the premises with the Kevo system you approach the door with your iOS or Android device on your person, with the app installed, and reach out and touch the lock. The LEDs in the lock blink blue and a few seconds later the door is unlocked. When you exit the premises you simply touch the lock again and it locks. This feels very natural and sounds great, but it doesn’t always work. Most of the time the described scenario works, but just as you get in the habit of leaving, touching the lock, and proceeding on your way, you don’t hear the familiar locking sound and you have to return and repeat your effort. Unlocking the door sometimes requires two tries, not often, but sometimes.

There is an alternative. Through a $50 / year subscription to Kevo Plus you can unlock and lock the door remotely as long as you have Internet connectivity. In addition, the Kevo does come with a fob that enables those without a smartphone to access your home. However, since the system allows the use of a physical key, which is smaller than the fob, the value of this feature is questionable; kids do love it!

There are two main ways to gain access using the August. The first is by pulling out your phone, opening the app, waiting a few seconds for the app to recognize the lock, and then pressing the unlock button. I wouldn’t mind pulling my phone out to unlock the door, but having to find the app, open it, wait for it to find the lock, and then finally unlock the door drives me crazy. Thank goodness there is a second way.

In the August app you enable the auto unlock feature and set a radius from the lock on a map. When you leave your home, lock the door, and exit the predefined radius, the app takes note. The app interprets your reentering the predefined radius as an indication that you’re returning home and starts trying to acquire the lock via Bluetooth. When you get within Bluetooth range the lock unlocks. When you get to the door you simply enter. This feature makes the lock usable.

Your existing physical keys still allow you access to the premises; there is no FOB sort of device. For an additional one-time $50 purchase you can buy the August Connect device that connects the lock to your home WiFi and hence the Internet. This is useful because it enables you to lock and unlock the door from anywhere you have Internet connectivity. Instead of sharing keys with other random visitors you can simply open the door for them and lock it when they leave.

Physical Construction

The Kevo feels like any Kwikset deadbolt system: substantial and of reasonable quality. The August feels less durable and more plastic, and on my test unit the battery cover periodically falls off. This does not affect the security of the premises, but it does make the lock feel like an expensive, yet cheap, toy.

Family Picks

Thus far my family likes the August better than the Kevo. The Kevo fails them periodically and this does not instill trust; none of us wants to carry a physical key for the times it refuses to open. With August Connect you acquire remote locking for a one-time $50 purchase, while the same functionality for the Kevo is $50 per year. The auto-unlock mode on the August makes it usable; without it the August would be intolerable. While the August costs $60 more than the Kevo, you get unlimited free virtual keys and August Connect for a one-time fee. After two years of use the Kevo would cost as much as the August, and its cost would continue to increase.

Other Dissatisfactions

Neither of these devices has an open API that would allow it to be connected to other home systems. August supplies a closed API, for their trusted partners, through their August Connect device. I am unaware of an API for the Kevo system.

Both companies take the typical path of requiring the user to create an account and install a specialized app, with the company “caring” for the user’s data, keys, credentials, etc. These closed systems make it difficult to include these locks in the broader and more interesting smart home movement.

Blog Moved to Domain of One’s Own

At Brigham Young University we are experimenting with and piloting a Domain of One’s Own experience for students, faculty, staff, courses, and who knows what other uses we’ll find. To experience this environment I have chosen to move some of my content to my new blog at kelly.flanagan.io with the associated site being hosted by Reclaim Hosting.

The main focus of this experiment is to educate, encourage, and facilitate students in taking control of their digital identity. Instead of placing their content on social media sites where others drive how their content is displayed, what security policies exist, and how long their content persists, we are hoping to give students a place they can call their own and control the way their content is shared with others of their choosing.

However, we also hope to use this environment to implement a personal API for each of our participants. Imagine that when a domain is created and hosted, a subdomain is also created, perhaps api.domain. This URL points to an application implementing an API for the individual. This personal API would have resources pertaining to the individual that could be created, retrieved, updated, or deleted using the appropriate HTTP methods. These resources would be protected by OAuth, or some other mechanism, allowing the individual to protect their information from others while authorizing those they desire to access it.
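A minimal sketch of one such resource, using Flask and a placeholder token check standing in for a real OAuth flow; everything here is illustrative rather than our actual implementation:

from flask import Flask, request, jsonify, abort

app = Flask(__name__)
profile = {"name": "Ada", "email": "ada@example.com"}  # one toy resource

def authorized(req):
    # Placeholder for a real OAuth check: validate the bearer token and
    # confirm the caller was granted access to this resource.
    return req.headers.get("Authorization") == "Bearer demo-token"

@app.route("/profile", methods=["GET", "PUT"])
def profile_resource():
    if not authorized(request):
        abort(401)
    if request.method == "PUT":
        profile.update(request.get_json())  # owner-authorized update
    return jsonify(profile)

if __name__ == "__main__":
    app.run()  # would live at api.<your-domain> in this architecture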

In the end, perhaps this sort of architecture will result in institutions, like BYU, not having to hold onto individuals’ personal information, but rather asking students, staff, faculty, and others for permission to access the needed information from the individual’s personal API. This would allow individuals to control the use and spread of their information and reduce the amount of personal information the institution needs to protect; as a CIO, I really like that last bit!

There are others working in this area, including Kin Lane, Jim Groom, and Phil Windley. If you want to participate, learn more, contribute, or just listen in, please join us at the next University API and Domains (UAD) conference, to be held again in early 2016.

Docker for Cross Compilation

Given my interests in enterprise computing and embedded systems, I decided to mix the two disciplines and use Docker to create a C development environment that generates Arm executables from my OS X based Mac. This may seem like a stretch, but the traditional way to do this is to fire up a full-blown virtual machine, install Linux, install the gcc-based cross compiler, edit code on the host machine, switch to the VM, compile, and iterate. This is slow and cumbersome, and not nearly as educational or fun!

The first step was to install boot2docker and follow the directions. This provided a simple and clean way to get Docker running on OS X. Next I created a Dockerfile to build a Docker image FROM Ubuntu, load all the required modules, and configure the image for cross compilation. Here is my Dockerfile:

##################################################
# This docker file creates an Arm cross compiler
# platform from Ubuntu
##################################################
# Ubuntu as base image
FROM ubuntu
MAINTAINER J. Kelly Flanagan, Brigham Young University
# update ubuntu image
RUN apt-get update
# enable 32 bit code to run on 64 bit machine
RUN apt-get -y install lib32z1
# install the gcc tools to enable compilation
RUN apt-get -y install gcc build-essential libncurses5-dev mtd-utils u-boot-tools
# ADD cross compiler tools and unpack in appropriate location
# create destination directory
RUN mkdir /opt/codesourcery/
# ADD source
ADD arm* /opt/codesourcery/
ADD mkubifsimage /bin/
RUN chmod 777 /bin/mkubifsimage
# for interactive use set path
RUN echo 'PATH=/opt/codesourcery/arm-2011.09/bin:$PATH' >> /root/.bashrc
# set environment variable so I know I’m in a container
ENV ARM_CROSS_COMPILER TRUE
# create build directory where a volume will be mounted
RUN mkdir -p /tmp/arm_cross_compiler
# End Dockerfile

With this Dockerfile I ran the following build command using docker,

docker build -t ubuntu_arm_crosscompiler .

this resulted in a new Docker image called ubuntu_arm_crosscompiler. This image can be used to create an interactive Docker container by executing,

docker run -i -t ubuntu_arm_crosscompiler

This yields a shell where you can invoke gcc to create Arm object files and executables from source. However, I don’t use it interactively; I invoke it from make so that it appears as if I am compiling on my Mac, but I end up with the desired Arm executable.

As an example, let’s assume we have a source directory with a Makefile and a few source files: Makefile, test.c, test.h, and testfunc.c. The Makefile uses conditionals to determine whether it is executing on the host machine or in the Docker container. In the Dockerfile the environment variable ARM_CROSS_COMPILER was set and will exist in any container derived from that image. The contents of the Makefile are included below.

# Name: Makefile
# Purpose: Build Arm executable via Docker based cross compiler
# Author: J. Kelly Flanagan
# Docker specific stuff
DOCKER_ARGS = -v $(PWD):/tmp/arm_cross_compiler -w /tmp/arm_cross_compiler
CC=/opt/codesourcery/arm-2011.09/bin/arm-none-linux-gnueabi-gcc
RM=rm -f
HEADERS = test.h
OBJECTS = test.o testfunc.o
TARGET = test
test: $(OBJECTS)
ifeq ($(ARM_CROSS_COMPILER),TRUE)
            $(CC) -o $@ $^ $(CFLAGS)
else
            @docker run $(DOCKER_ARGS) ubuntu_arm_crosscompiler make $@
endif
%.o: %.c $(HEADERS)
ifeq ($(ARM_CROSS_COMPILER),TRUE)
            $(CC) -c -o $@ $< $(CFLAGS)
else
            @docker run $(DOCKER_ARGS) ubuntu_arm_crosscompiler make $@
endif
clean:
            $(RM) $(OBJECTS) $(TARGET)

From the source directory we execute make and the first target (test) is invoked. This target depends on the object file targets, which check whether the header file or the source files are newer than the corresponding object files. If an object file doesn’t exist or its source file has been modified, the action is taken. The action checks whether the ARM_CROSS_COMPILER environment variable is set. If it is not, the Docker command is executed. When the Docker container is created from the image, it mounts the current working directory and executes the same make command that was invoked on the Mac. However, in the container the environment variable is set, so the source file is compiled to an object file using the cross compiler. The make in the container then completes and returns to the make on the Mac, which moves on to the next target. This is repeated until all object files have been created or updated, at which point a final invocation links the object files into one executable.

Docker works well in this case because containers can be created and destroyed quickly enough not to be a major contributor to compile time, and it is much more convenient than switching back and forth between my host and a VM. Finally, I can share either the Dockerfile or the image with others, enabling them to easily use my tool chain. I’m definitely adding Docker to my standard set of tools for solving problems.

Criminals, Conjecture and Connected Things

My connected devices were initially secured using a hash-based message authentication code (HMAC). An HMAC is constructed for each HTTP request directed at a device. The HMAC is created by combining some of the HTTP header elements, the time and date, the body of the request, and a secret key / password. The output block is then attached to the header of the request and transmitted to the selected device. Upon reception of the request, the device collects the same header information, the time and date, and the body of the message, and computes the HMAC using the same secret key / password. If the computed HMAC and the one sent with the request match, the request is fulfilled.
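A minimal sketch of the scheme in Python; the header name, the exact fields signed, and the secret below are illustrative, not the device’s actual protocol:

import hashlib
import hmac

SECRET = b"shared-device-secret"  # provisioned on both device and client

def sign(method, path, date, body):
    """Combine the signed elements and compute the HMAC tag."""
    msg = "\n".join([method, path, date, body]).encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

# Client: compute the tag and attach it (e.g., in an X-Signature header).
tag = sign("POST", "/ringtone", "Tue, 03 Mar 2015 10:00:00 GMT", "dogbark.wav")

# Device: recompute from the received elements and compare in constant time.
def verify(method, path, date, body, received_tag):
    return hmac.compare_digest(sign(method, path, date, body), received_tag)

assert verify("POST", "/ringtone", "Tue, 03 Mar 2015 10:00:00 GMT",
              "dogbark.wav", tag)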

While this protects the device from being compromised and the data from being changed, it does nothing to protect the data from being read by prying eyes. At first glance this seems unimportant, since the data viewed by others, inadvertently or intentionally, carries little information. For example, why would we care if others know the temperature of some sensor in our home? Why would it matter if someone knows the ringtone loaded into our doorbell? Why would it matter if others know the setting on our thermostat? Well, each on its own might not matter, but with perhaps hundreds of devices in our home we offer up a lot of unprotected data for criminals to conjecture with.

Imagine a criminal mastermind sniffs your unprotected data and learns that your thermostat was just set to 55 degrees and the dog-bark ringtone was loaded into your doorbell. In addition, late in the evening the temperature in the home is still 55 degrees. It wouldn’t be difficult to conjecture that no one is home. The dog-bark ringtone may also lead them to believe that you’re gone for a while.

This simple example is intended to show that while we might erroneously think this data does not need to be protected, it does! In addition to protecting the device from intruders using an HMAC, the data needs to be encrypted using SSL or other techniques. By securing both your connected devices and your data, you will reduce the chance of having your virtual and physical spaces compromised by criminals.