Tag Archives: 2014


Openstack Icehouse on home network – part 2

In my previous post, titled OpenStack Icehouse on Fedora 20 using packstack on home PC, I list the steps I followed to install Openstack Icehouse on my home network.  That was mostly done from the command line of the Openstack PC.

This posting is part 2.  With the installation from part 1, I can now use the Openstack Dashboard.  It is a nice web interface with straightforward menus.  In the sections below, I will show screen captures from a web browser connected to the Dashboard; I will also show some command-line text from a PuTTY window.

This writeup shows the steps for setting up a public subnet that links Openstack to my home network and a private subnet for my guest instances to use.  Then I show how to create an instance using the bare-bones cirros image.  And finally, I install a Fedora 20 cloud image into the image repository and spin it up with a basic web server.


Since Openstack and the Openstack Dashboard are new to me, I did all of this with the generous help of the references listed below, especially recognizing Seth Jennings' excellent Openstack Icehouse on Fedora 20 using RDO video.

Create a public network

Starting with the Openstack Dashboard, logged in as admin, create a public network.  My home network is 192.168.100.0/24.  The IP address 192.168.100.163 is my home router, the gateway to the internet.  My Openstack Icehouse host is 192.168.100.154.  Openstack needs a subnet, referred to as public, that sits in this address range.

The naming convention of calling the 192.168.100.0/24 network public started with the packstack install scripts.  In the context of Openstack, this network is the one with a gateway to the internet and is thus referred to as public, even though 192.168.x.x addresses are defined (by RFC 1918) as private address space.

Openstack Dashboard Menu: Admin->Networks->Create Network


Openstack Dashboard Menu: Admin->Networks->public->Create Subnet


Openstack Dashboard Menu: Admin->Networks->public->Create Subnet 2

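For reference, the same setup can be done from the command line.  Here is a rough sketch using the Icehouse-era neutron CLI; the allocation-pool range is my own guess, based on the floating IPs 192.168.100.101 and .102 that show up later in this post:

# load the admin credentials that packstack generated
. ~/keystonerc_admin

# create the external network named "public"
neutron net-create public --router:external=True

# add the home-LAN subnet; DHCP stays off so Openstack doesn't fight
# with the home router, which remains the gateway at .163
neutron subnet-create public 192.168.100.0/24 --name public_subnet \
  --gateway 192.168.100.163 --disable-dhcp \
  --allocation-pool start=192.168.100.100,end=192.168.100.120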

Create a private subnet

Next, create a subnet that is private to the Openstack host.  The addresses must be different from those of public_subnet.  These addresses never leave the Openstack host and its underlying Open vSwitch network address space.  The references use 10.0.0.0/24 as the network and 10.0.0.1 as the gateway address, and that's what I use below.  Further, each guest instance needs to be given an IP address; I chose 10.0.0.2 through 10.0.0.20 as the range for the DHCP address pool.

Openstack Dashboard Menu: Project->Network->Network Topology->Create Network — “private”


Openstack Dashboard Menu: Project->Network->Network Topology->Create Network->Subnet


Openstack Dashboard Menu: Project->Network->Network Topology->Create Network->Subnet Details

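Again for reference, a rough CLI equivalent (a sketch, assuming the Icehouse neutron client; this time DHCP is enabled so the guests pick up their 10.0.0.x addresses automatically):

# create the tenant network named "private"
neutron net-create private

# add the 10.0.0.0/24 subnet with a DHCP pool of 10.0.0.2-10.0.0.20
neutron subnet-create private 10.0.0.0/24 --name private_subnet \
  --gateway 10.0.0.1 --enable-dhcp \
  --allocation-pool start=10.0.0.2,end=10.0.0.20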

Create a router

OK, there's a public and a private subnet defined.  The Openstack Dashboard has a really simple way to connect them with a router function: create a router, define a default gateway, and then add interfaces to the private subnets.

Openstack Dashboard Menu: Projects->Network Topology->Create Router


Openstack Dashboard Menu: Project->Routers->Set Gateway


Openstack Dashboard Menu: Project->Routers->router->Add Interface

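The CLI version of the router steps is short.  A sketch, using the router name "router" that appears in the Dashboard menus above:

neutron router-create router
# the gateway interface goes on the public network
neutron router-gateway-set router public
# and one interface per private subnet
neutron router-interface-add router private_subnet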

Verify Network Topology

All the steps up to this point built the network that connects virtual machines to the home LAN.  Open the Dashboard page below to see the two subnets connected to a router.  The public subnet is on the home LAN.  The private subnet is the address space where the guest instances will connect.

Openstack Dashboard Menu: Project->Network Topology


Set up Security Group Rules

The references suggest that for trial/learning purposes, the Security Group Rules should be wide open.  The idea is that, while you are learning the technology, restrictive security settings can obscure basic setup issues.  In the long run, this needs to be managed more carefully.

First, remove the default rules that packstack set up, then install rules that permit incoming and outgoing TCP/UDP/ICMP on all ports.

Openstack Dashboard Menu: Project->Compute->Access and Security->default

Delete all the default rules and rebuild the rules so that the Security Group Rules table looks as follows:

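If you prefer the command line, the same wide-open rules can be created with the neutron client.  A sketch:

# allow all ICMP, TCP, and UDP, in and out, for the default group
for dir in ingress egress; do
  neutron security-group-rule-create default --direction $dir --protocol icmp
  neutron security-group-rule-create default --direction $dir --protocol tcp \
    --port-range-min 1 --port-range-max 65535
  neutron security-group-rule-create default --direction $dir --protocol udp \
    --port-range-min 1 --port-range-max 65535
done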

Set up an ssh key pair

Using the normal ssh tools, make an ssh public/private key pair.  The Openstack Dashboard lets you cut and paste your own public key into the project.  The instances that get created will have the public key pre-installed.  To access instances that Openstack creates, use the private key as an option on the ssh command line.

Log in as root on the Openstack host; at the command prompt, create a key pair and copy the public key to the clipboard.

[root@kozik4 ~]# ssh-keygen -t rsa -f cloud.key
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in cloud.key.
Your public key has been saved in cloud.key.pub.
The key fingerprint is:
36:37:4d:d3:5b:82:45:53:48:98:24:00:8b:0b:3b:06 [email protected]
The key's randomart image is:
+--[ RSA 2048]----+
|      .......==o.|
|     . .   .o+.. |
|E . . .     + o .|
| . o .     o . + |
|  + .   S o . .  |
| . .   . o .     |
|                 |
|                 |
|                 |
+-----------------+
[root@kozik4 ~]# ls
anaconda-ks.cfg  ifconfig5.out         packstack-answers-20140803-201418.txt
cloud.key        installpackstack.log  packstack.log
cloud.key.pub    keystonerc_admin      rdorelease.log
ifconfig1.out    keystonerc_demo       runpackstack.log
ifconfig2.out    ovs1.out              yumupdate.log
ifconfig3.out    ovs2.out
ifconfig4.out    ovs3.out
[root@kozik4 ~]# cat cloud.key.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDijKn/k5ejNii3SaNugO75Njz1LQHyDDwI5blZO4+CZTRL/O/8czffrUUfK8+j3QjAx7MByNJVkj8YCGtOAYv5wCFEhzkRhqNNJlH235L++QV6ai/XPD7b0VcqhCjQTkDIfyBMp7fZO+D0BGdvTjBiQXIJdZLqZWV2j9qH8EHHS55OlOXpAAcMHvRRgWFtMdn5YSLUcq8X5HRtvfesLL7quJmNDc8/rS6mhmL/NFU56r+SJpHvr7N59U7ywNejLgFp6hfz4zZw3nWDH9y+by1zdWbNfATIO362SRue+FvuF060ss4Ciesuqw5v3tJMeyq9JM41lu8fQaIeBqoJTB43 [email protected]
[root@kozik4 ~]#

In a PuTTY screen like the example above, select the text output of the 'cat cloud.key.pub' command and paste it into the Openstack Dashboard as follows.

Openstack Dashboard Menu: Project->Access & Security->Key Pairs->Import Key Pair

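As an alternative to the cut-and-paste, the same import can be done with the nova client; a sketch, where the key pair name "cloud" is my own choice:

nova keypair-add --pub-key cloud.key.pub cloud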

Launch Test Instance ‘cirros’ and assign floating IP

To help verify that the Openstack packstack install worked correctly, spin up the bare-bones cirros image.  This image is a really small Linux distribution.  I had never heard of cirros, but I get the purpose of it.  My initial setup had troubles, and cirros helped me troubleshoot basic setup problems.  I was glad the initial install pulled it in.

The following steps start up an instance, link it to the private subnet, and map the private IP address of the instance to a floating IP address on the public subnet.  Floating IP addresses were new to me, and at first it wasn't obvious how they should be used.  I think of a floating IP as a generalized NAT function that lets me hide my home network topology from the Openstack instances.

Openstack Dashboard Menu: Project->Images->Launch->Details

ProjectImagesLaunchDetails

Connect the instance to the private subnet.  Note: the web page below requires you to drag the "private" row and drop it into the cyan-colored Selected Networks bar.

Openstack Dashboard Menu: Project->Images->Launch->Networking

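For reference, a rough CLI equivalent of the launch.  This is a sketch: the m1.tiny flavor and cirros image come with the packstack install, "cloud" is the key pair name assumed above, and the net-id comes from the neutron net-list output:

# find the UUID of the private network, then boot an instance on it
neutron net-list
nova boot --flavor m1.tiny --image cirros --key-name cloud \
  --nic net-id=<private-net-uuid> cirros1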

Verify the test instance is running:

Openstack Dashboard Menu: Project->Instances


Allocate a Floating IP address

Openstack Dashboard Menu: Projects->Access & Security->Floating IPs->Allocate IP to Project


Associate the instance's private IP address with an IP address on the public subnet.

Openstack Dashboard Menu: Project->Access & Security->Floating IP->Associate

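The allocate/associate pair also has a short CLI form; a sketch, reusing the instance name cirros1 assumed in the boot sketch above:

# grab a floating IP from the public pool, then attach it to the instance
nova floating-ip-create public
nova add-floating-ip cirros1 192.168.100.101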

Verify that the instance has two IP addresses and is running OK.

Openstack Dashboard Menu: Project->Instance


Go back to the PuTTY prompt on the 192.168.100.154 host (root login, home directory).  Verify that we can set up an ssh connection to the new instance.  The default login id is cirros.  The Instances web page above tells us to use 192.168.100.101.

# ssh -i cloud.key [email protected]
The authenticity of host '192.168.100.101 (192.168.100.101)' can't be established.
RSA key fingerprint is 34:51:4c:22:c3:67:d3:47:38:83:c2:ee:55:0f:4b:e5.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.100.101' (RSA) to the list of known hosts.

$ ifconfig
eth0      Link encap:Ethernet  HWaddr FA:16:3E:62:7D:36
          inet addr:10.0.0.3  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe62:7d36/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1

$ ping yahoo.com
PING yahoo.com (98.139.183.24): 56 data bytes
64 bytes from 98.139.183.24: seq=0 ttl=47 time=49.484 ms
64 bytes from 98.139.183.24: seq=1 ttl=47 time=121.366 ms
64 bytes from 98.139.183.24: seq=2 ttl=47 time=81.164 ms

Note: the ifconfig output shows that the instance only knows about its private_subnet address, 10.0.0.3.  Also, an important test: verify that the instance can access the internet; I used ping yahoo.com.

Create a Fedora 20 instance

The cirros instance installation steps above helped verify that basic functionality worked.  But cirros is not a Linux distribution I want to use; I want the latest version of Fedora.  In this section I repeat some of the steps from the previous section to get a Fedora 20 instance started.  Enough is different here that I wanted to document it.

From the Fedora In the Cloud web page, right-click the 64-bit qcow2 image and select "Copy Link Address."  The Images page has an option to import new images using a URL.


Create an image from this URL:

Openstack Dashboard Menu: Project->Create Image

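A rough CLI equivalent of the image import, using the Icehouse-era glance client (a sketch; the image name is my own, and the URL is whatever you copied from the Fedora page):

glance image-create --name "Fedora 20 x86_64" \
  --disk-format qcow2 --container-format bare --is-public True \
  --copy-from <URL copied from the Fedora In the Cloud page>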

Following the same steps as the cirros image, launch the Fedora 20 image.

Openstack Dashboard Menu: Project->Instance->Launch Instance


Be sure to click the Networking tab and select the private subnet, then click Launch.

Once the instance is running, allocate a floating IP.  Go to the Project->Access & Security->Floating IPs menu.  First run Allocate IP to Project, then Manage Floating IP Associations for the Fedora 20 instance… just like we did for the cirros instance.

The Instances Dashboard page now shows two instances.

Openstack Dashboard Menu: Project->Instance


And the Network Topology page gives a nice picture of how everything is wired together.

Openstack Dashboard Menu:  Project->Network->Network Topology


So, just like with cirros, go to the Openstack host root login prompt and ssh to the Fedora instance.  The Instances page above shows 192.168.100.102 as the IP address.

# ssh -i cloud.key [email protected]
The authenticity of host '192.168.100.102 (192.168.100.102)' can't be established.
RSA key fingerprint is 2e:b3:7b:6b:06:43:cf:d5:95:95:49:38:5f:ab:20:39.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.100.102' (RSA) to the list of known hosts.
[fedora@fedora-20 ~]$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.4  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::f816:3eff:fe9b:baf5  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:9b:ba:f5  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)

[fedora@fedora-20 ~]$ ping yahoo.com
PING yahoo.com (206.190.36.45) 56(84) bytes of data.
64 bytes from ir1.fp.vip.gq1.yahoo.com (206.190.36.45): icmp_seq=1 ttl=49 time=106 ms
64 bytes from ir1.fp.vip.gq1.yahoo.com (206.190.36.45): icmp_seq=2 ttl=49 time=156 ms
64 bytes from ir1.fp.vip.gq1.yahoo.com (206.190.36.45): icmp_seq=3 ttl=49 time=102 ms

Note: the Fedora guest instance only knows its private address, 10.0.0.4.  The default login id, fedora, has sudo permissions; access to the root login is done with a 'sudo su -' command.  It is also important to verify that the instance can talk to the outside world, which is why I check that ping yahoo.com works.

To further test my Fedora 20 instance, I switched to the root login on the instance and installed Apache:

[fedora@fedora-20 ~]$ sudo su -
[root@fedora-20 ~]# yum groupinstall "Web Server"
[root@fedora-20 ~]# systemctl enable httpd.service
[root@fedora-20 ~]# systemctl start  httpd.service

From there, I went to another PC on my home network and verified that the default web server works.


References


Audi Connected Car – 2014 S7

My Audi Connected Car is a 2014 S7.  I got it at the beginning of June, and I have been learning how to use the Audi Connect service.


SIM Card

My Audi S7 has a T-Mobile SIM card, connecting it to the T-Mobile US 3G/UMTS network; the first 6 months of usage are free.  I have turned on the Audi Connect service, linked it with my Google account, and linked it with the iPhone app.  I even turned on the WiFi hotspot in my car.  It all works pretty well.  My T-Mobile service is data-only — I see no way to use the TMO voice or text features.

Here are my notes.

Audi Connect iPhone App


The Audi S7 has a simple iPhone App.  Two useful features:

  • Online Destinations.  I can enter addresses on an Online Destinations form; these addresses then show up on my Audi S7's navigation menu.
  • Car Finder.  I set up the Audi S7 to report its GPS coordinates whenever the car shuts off.  The Car Finder feature lets me find where my car is parked.  This is a nice feature, but a little gimmicky for me.

Car Finder App

Google Maps – "Send to Car"

The Audi Connect service also ties into Google Maps.  When I registered my Google login with Audi Connect (my.audi.com, Services -> Dest From Google Maps), Google Maps added an option that lets me send a map address to my car.  Here's a Google Maps screenshot:


http://maps.google.com

So if I am going to visit some place new to me, I can look up the address on Google Maps, save it with the "Send to Car" option, and have the address show up in the Audi S7 Navigation menu.  Really cool.  I like this feature.

I can erase saved destinations using the mobile app.

myAudi Portal – my.audi.com

To set up the Audi Connected Car service, I got a login on the myAudi portal, my.audi.com.  The portal helped me set up the Destination From Google Maps and Destination Input services.  The portal required me to enter my VIN, and it gave me a key to use when setting up the car.

Here’s a snapshot from the portal:


http://my.audi.com


Audi Connect

The MyAudiConnect website links the Audi's TMO service to the myAudi service.  The key parameters are the 19-20 characters printed on the SIM card (technically known as an ICCID — see the SIM wiki) and the car's 17-character VIN.  The Audi Connected Car uses T-Mobile, though the My Audi Connect portal's drop-down menu also lets you choose AT&T.

As near as I can tell, I will never get a bill directly from T-Mobile.  It appears to come from Audi — but I am sure T-Mobile is getting paid.

https://myaudiconnect.com

The screenshot above indicates that my Account is active, but my Audi Connect and Wireless Networks are inactive — that's correct; the car is parked, off, in my driveway.  Also, it is hard to read, but the two airbrushed-out fields above are the car's VIN and the T-Mobile SIM card number.  Sorry if I got carried away with the red stuff.

Dashboard View

The view from the dashboard in the car is pretty intuitive.  When in Navigation mode, the following screen displays when you select "Destinations":


Navigation->Destination->Online destinations

The display shows an "Online destinations" option; from there, you find the destinations you set using the Google Maps "Send to Car" option.

A nice feature of the Audi Connected Car is that the navigation view superimposes satellite imagery from Google Maps, as seen below:


Navigation Map w/Satellite View

Some additional Google Maps features of note:

  • Google Search.  Traditional in-car navigation systems have a very good database of points of interest and street addresses.  The Audi Connected Car also lets you run a direct Google search to find destinations nearby or in your destination city.
  • The Google Maps display settings let you show Places, Businesses, Panoramio pictures, or even pointers to Wikipedia articles.

Below are screenshots taken directly from the Audi Connect information services.  First there's a menu of simple services (not linked with Google), and below that is a screen capture of a radar weather map.  The content is formatted specially for Audi; it is not real web surfing.


Menu->Info->Audi Connect



Menu->Info->Audi Connect->Weather Info

SIM Card Slot

The car's dashboard has a compartment, hidden by a folding door, where the SIM card slot and two SD card slots are found.  Note: only the SIM card is plugged in for the picture below.


SIM Card Slot

In Car WIFI

The car lets you set up a WiFi hotspot that routes through the TMO UMTS/3G data service.  I tried it; it worked OK, but it's not as fast as our 4G/LTE data service and I am not really sure why I'd ever want to use this — maybe for a guest?

Anyway, I ran a speed test on the WiFi, and for my neighborhood the service is 1 Mbps down / 0.5 Mbps up.  Here's a screenshot from my mobile:



References


REST web service using Node, was originally written in PHP

Background on my REST web service using Node.js: one of my REST web service applications, originally written in PHP several years ago, needed to be cleaned up.  I wanted to refactor it to make it easier for me to maintain.  I had picked a PHP framework, and I was going to re-code.

For my own education, I decided to use this project as a trigger to learn Node.js.

I had been following the progress of the node community, and some of the organizations at work had started using node, so I decided to give it a try.  I had been using JavaScript for quite a while doing front-end programming in my browser applications and felt that it shouldn't be too hard for me to learn.

The notes that follow capture some of my learnings and record some of the steps I followed to design and setup my new web service.

Install Node.js on my Fedora Server

I am the system admin of my own Fedora server, and I was looking to do a full install of nodejs.  If you have a new Fedora server (Fedora 18+), you can just use "yum install nodejs" — but I have an older Fedora server, so I had to dig around; I found several useful how-tos.  What worked best for me were the following steps:

From root login:


$ cd /usr/local
$ wget http://nodejs.org/dist/v0.10.25/node-v0.10.25.tar.gz
$ tar zxvf node-v0.10.25.tar.gz
$ cd node-v0.10.25
$ ./configure
$ make
$ make install

Source: Ask Fedora Project

I then put node into the default login profile:

$ cd /etc/profile.d
$ vi node.sh
#insert following lines
export PATH=/usr/local/bin:$PATH
export NODE_PATH=/usr/local/lib/node_modules
ZZ

From the root home directory and from my user login id, I verified that node worked:

$ node -v
v0.10.25

Then, in my user account, I wrote a hello-world-style test application:

$ cd $HOME
$ mkdir helloworld
$ cd helloworld
$ vi app.js
// insert the following lines
// I forget which howto website I borrowed this from
var http = require('http');
var server = http.createServer(function(req, res) {
  console.log("web page accessed");
  res.writeHead(200);
  res.end('Hello Http');
});
server.listen(8000);
ZZ
$ node app.js

Note: the app.js is listening on port 8000.  To make this work on my server, I needed to go into iptables and open port 8000.  I am not going to walk through that in detail, but a minimal sketch follows; it is easy to do and easy to overlook.  It is worth the trouble to get this little app working because you'll need port 8000 working for the next step.
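For what it's worth, here is what the iptables step might look like on an older Fedora like mine (newer Fedoras use firewalld instead):

# open TCP port 8000, then save so the rule survives a reboot
iptables -I INPUT -p tcp --dport 8000 -j ACCEPT
service iptables save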

Once port 8000 is open and the app.js script running, do the following command in a different terminal  window.

From user login:

$ curl -i http://localhost:8000

Output:

HTTP/1.1 200 OK
Date: Thu, 13 Feb 2014 16:27:00 GMT
Connection: keep-alive
Transfer-Encoding: chunked

Hello Http 

This turned out to be harder than I thought.  The nodejs website offers Linux binaries; I tried them and they didn't work — I assume I did something wrong.  Instead, I did the install-from-source setup.  On my machine, it took about 10 minutes to compile and build.  (The make step took longer than I expected.)

Restify Module

The node.js community has a large library of packaged modules.  I picked the Restify module to build my package upon.  Since I am new to node, I don't really know what the best choice would be; I just wanted to leverage something that had some good how-to articles.

To setup restify, I ran the following steps from my root login:

$ npm install restify -g

Then from my development login, I wrote a barebones test application:

$ cd $HOME
$ mkdir restify
$ cd restify
$ vi app.js
// insert the following lines
// borrowed from stack overflow article:
// http://stackoverflow.com/questions/17589178/why-should-i-use-restify
var restify = require('restify');
var server = restify.createServer();
function respond(req, res, next) {
    res.send('hello ' + req.params.name);
}
server.get('/hello/:name', respond);
server.listen(8000, function() {
    console.log('Listening on port 8000');
});
ZZ
$ node app.js

Like the previous hello world test, this one verifies that basic restify works.  I ran "node app.js" in one terminal window, and in another I ran the following:


$ curl -i http://localhost:8000/hello/jack

Output:

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 12
Date: Thu, 13 Feb 2014 17:03:19 GMT
Connection: keep-alive

hello jack 

I make this all look easy, but along the way I had several gotcha-type problems.  I originally didn't have the environment variables set up right.  In my initial installation of restify, I forgot to use the -g option.  And in my initial setup, I did a couple of things in my root login that should have been done in my user login, etc.  It is worth getting the basic hello world working before you start.

Reference:  Creating a basic node.js api with restify and save by Dom Udall

My REST Design

My web application edits and reports on some resources stored on my server.  I want to grant access to these resources through a REST-ish API.  The two key resources are charts and jefiles — it turns out these are actually text files stored on my server.  There is one chart file per year, so in my REST URIs I use the notation chart/YY.  The jefile files are monthly, 12 months per year, so I picked the notation jefile/YY/MM.  In my REST URIs, MM is 01-12 and YY is any two digits representing a year (eg YY==12 implies 2012, YY==98 implies 1998).

So, my design is for the following “GET” operations return data from the server to my web application:

# returns contents of chart file for the year YY, 
# where YY is 13 for 2013
http://localhost:8000/chart/YY 

# returns contents of jefile for the year YY, Month MM
# - where YY is 13 for 2013
# - where MM is 01-12, for a month
http://localhost:8000/jefile/YY/MM  

# returns concatenation of all jefiles for the year YY
# - where YY is 13 for 2013 (used for year to date reports)
http://localhost:8000/jefile/YY

Also, for debugging purposes, I want both plain-text and JSON-formatted data returned.  I picked the above syntax to represent the plain-text responses.  Here's the syntax for JSON responses:

http://localhost:8000/chart.json/YY
http://localhost:8000/jefile.json/YY/MM
http://localhost:8000/jefile.json/YY

I have experimented with using MIME types or other header mechanisms to control plain text vs. JSON.  I like this method because it lets me debug in any normal browser; no special REST browser plugins need to be installed.  This approach is also valuable to me because it maintains continuity with my existing, several-year-old PHP design approach.
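For example, the same chart resource can be fetched both ways from the command line (YY of 13 for 2013):

$ curl http://localhost:8000/chart/13        # plain text, easy to eyeball
$ curl http://localhost:8000/chart.json/13   # same data, JSON for the web app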

So with this design, here’s the basic restify application.

From the file app.js:

var restify = require('restify');

var server = restify.createServer();
server
  // Allow the use of POST
  .use(restify.fullResponse())
  // Maps req.body to req.params so there is no switching between them
  .use(restify.bodyParser());

server.get('/chart/:YY', getChart);
server.get('/chart.json/:YY', getChart);

server.get('/jefile/:YY', getJefileForYY);
server.get('/jefile.json/:YY', getJefileForYY);

server.get('/jefile/:YY/:MM', getJefile);
server.get('/jefile.json/:YY/:MM', getJefile);
server.put('/jefile/:YY/:MM', putJefile);

server.listen(8000, function() {
    console.log('Listening on port 8000');
});

The server.get() callback functions (eg getJefile()) all follow a common pattern:

function getJefile(req, res, next) {
    console.log(req.params.YY);
    console.log(req.route.name);
    ...;
    res.send(...);
    next();
}

The restify framework's server.get() function takes the incoming REST request, parses the URI, matches it to the first parameter of the server.get() call, and runs the callback function with the request neatly parsed into the object req.  Since the GET method in REST APIs nominally has an empty message body, the only real input is the set of parameters parsed out of the URL.  In my code above, the :YY and :MM ids are the names of the URI parameters.

These parameters are parsed by restify and stored in the object req.params.  For all of my server functions, I always have req.params.YY as an input.  The Restify API documentation covers this, but again, this design pattern is found throughout nodejs code as a way for a chain of functions to all share the same input and output objects.

The callback functions above (eg getJefile()) generate the content that gets stored into the response object, the 2nd callback parameter, res.  When a callback function is complete, it calls next(), the 3rd parameter.

So here’s what my 3 server.get() callback functions look like.

From app.js:


getChart = function getChart(req, res, next) {
    console.log(req.route.name);
    var year = req.params.YY;
    // validate year
    // get contents of chart file
    // if I find any errors:  return next( Error object )
    res.send(chartFile);
    return next();
}

getJefile = function getJefile(req, res, next) {
    console.log(req.route.name);
    var year = req.params.YY;
    var month = req.params.MM;
    // validate year, month
    // read contents of jefile file
    // if I find any errors:  return next( Error object )
    res.send(jefileFile);
    return next();
}

getJefileForYY = function getJefileForYY(req, res, next) {
    console.log(req.route.name);
    var year = req.params.YY;
    // validate year
    // get contents of all of the jefile files, concatenate
    // If I find any errors:  return next( Error object )
    res.send(jefileFileForYY);
    return next();
}

Note:  the above code is simplified to better help explain what I am doing.

For the case where my web application wants to write content to the server, I use the REST PUT method.  The PUT method writes or overwrites the full contents of a resource.  Since I am working with whole files on the web server, PUT is appropriate.  POST would be OK if I were adding blocks of data to an existing file, but I am not.

So to make a PUT work in the restify framework, my server.put() callback function needs to parse and validate the message body, convert it from JSON to a native format, then save the contents to a file on the server.  Restify makes this pretty easy.  Here are the basics:

From app.js:


putJefile = function putJefile(req, res, next) {
    console.log(req.params);
    console.log(req.route.name);
    var year = req.params.YY;
    var month = req.params.MM;
    // validate year, month
    // validate that the request body, named jefilejson, is valid
    jefileFile = JSON.parse(req.params.jefilejson);
    // write contents of jefileFile to server, directory YY, file-MM
    res.send(200);
    return next();
}

Note: the message body holds a JSON object; in my case, the web application names the JSON object jefilejson.  So to access the contents of the message body, you access req.params.jefilejson.
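So a client-side call might look like the sketch below; the actual file contents are elided:

$ curl -i -X PUT -H "Content-Type: application/json" \
    -d '{"jefilejson": "...contents of the jefile..."}' \
    http://localhost:8000/jefile/13/04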

Validating the MM and YY ids

The server.get() routing function's syntax makes it easy to match roughly correctly formatted URIs, but for me it didn't have enough checking power to verify that the YY and MM ids were correct.  server.get() just matches any characters between the slashes; it doesn't check that YY is 2 numeric digits or that MM is 01-12 with the leading zero required.  I had to add my own validation logic in each of my callback functions.

It turns out the code for verifying the YY and MM ids was several lines of code repeated multiple times.  Node.js and restify design patterns give you the option of inserting additional callback functions into the server.get() function call.  I added validation functions as follows:

From app.js:


server.get('/chart/:YY', validateYY, getChart);
server.get('/chart.json/:YY', validateYY, getChart);

server.get('/jefile/:YY', validateYY, getJefileForYY);
server.get('/jefile.json/:YY', validateYY, getJefileForYY);

server.get('/jefile/:YY/:MM', validateYY, validateMM, getJefile);
server.get('/jefile.json/:YY/:MM', validateYY, validateMM, getJefile);
server.put('/jefile/:YY/:MM', validateYY, validateMM, putJefile);

Now, I liked this because it let me re-factor a bunch of common code out of each of my callback functions. I have seen other node.js code snippets that take advantage of this coding pattern.

Validation Functions and Error Handling

Within a nodejs or restify program, there’s a common way of handling errors. Since everything is asynchronous and chained together with next() functions, there needs to be a way to break the chain on error conditions. The key technique is to call the next() function with an Error object as a parameter. Let me show how I coded the validateYY() function.

From app.js:


/*
** Validate YY id.
** -- syntax:  2 digits, 00-99
** -- directory nfroot+YY must exist
*/
var fs = require('fs');   // file system module used by the checks below
// nfroot is the data root path, defined elsewhere in app.js

function validateYY(req, res, next) {
    var ret = 500;
    var year = req.params.YY;
    console.log('validateYY:'+req.route.name);
    var isValidYY = year.match(/^\d\d$/);
    if (!isValidYY) {
        ret =  new Error("The field YY-'"+year+"' is not valid.  YY is a two digit year where YY is 00-99");
        ret.statusCode = 400;
        return next(ret);
    }
    /* Verify that YY exists */
    if (!fs.existsSync(nfroot+year)) {
        ret = new Error("The field YY-'"+year+"' is not valid. The diretory-'"+nfroot+year+"' doesn't exist on server.");
        ret.statusCode = 404;
        return next(ret);
    }
    /* Verify that YY is a directory */
    if (!fs.lstatSync(nfroot+year).isDirectory()) {
        ret = new Error("Server side problem:  The path -'"+nfroot+year+"' is a file, but must be a directory.");
        ret.statusCode = 500;
        return next(ret);
    }
    return next();
}

The key is the Error object.  When I first saw code snippets using the Error object, I didn't really know what was going on.  I Googled it and found there's a rich design pattern built into node.js around how errors are handled.  If you just jump into node.js like me, there are lots of things you need to learn.  Restify just leverages Error, assuming coders already know how to work with node.js.

Notice in the code above that I tried really hard to follow best practices for mapping HTTP error codes into my REST API definitions.  Take the case of someone trying to access a resource like a particular jefile file within a year (eg ../jefile/YY/MM): if the syntax was wrong, I returned a Bad Request (400) error; if the format of the REST URI was right but the requested file didn't exist, I returned a Not Found (404) error.  I even verified that YY in the path was really a directory on the server; if not — that shouldn't ever happen — I returned an Internal Server Error (500).

The way the function chaining works: if next() is passed an Error object, the chaining stops and a JSON-formatted response is generated with the Error information.  This powerful construct is built into node.js, but if you didn't know it, you might start inventing it yourself, especially if you come from a PHP background like me.
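I don't show validateMM() above, but it follows the same pattern; here is a minimal sketch (the regex and error text are mine):

/*
** Validate MM id.
** -- syntax: 2 digits, 01-12, leading zero required
*/
function validateMM(req, res, next) {
    var month = req.params.MM;
    console.log('validateMM:'+req.route.name);
    var isValidMM = month.match(/^(0[1-9]|1[0-2])$/);
    if (!isValidMM) {
        var ret = new Error("The field MM-'"+month+"' is not valid.  MM is a two digit month, 01-12.");
        ret.statusCode = 400;
        return next(ret);
    }
    return next();
}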

Splitting Node applications into multiple files – Module Loading with Require

When I first started my node project, I put everything into one big app.js file.  This got too big and clumsy for me.  I am used to working on projects where coding is split across multiple source files.  The node documentation defines a method for splitting source code across multiple files… similar to how multiple modules are handled in web browser applications.

I decided to dedicate a source file to each of my main REST resources.  For this writeup, those would be chart.js and jefile.js.  I still have a main app.js.  The way to link them together is to use require() calls at the top of the main file:

Contents of app.js:


var restify = require('restify');
var chart = require('./chart');
var jefile = require('./jefile');
...
server.get('/chart/:YY', validateYY, chart.getChart);
server.get('/chart.json/:YY', validateYY, chart.getChart);
server.get('/jefile/:YY/:MM',validateYY,validateMM, jefile.getJefile);
server.get('/jefile.json/:YY/:MM',validateYY,validateMM,
             jefile.getJefile);
server.put('/jefile/:YY/:MM',validateYY,validateMM,jefile.putJefile);

Contents of jefile.js:


exports.putJefile = function putJefile(req, res, next) { ... }
exports.getJefile = function getJefile(req, res, next) { ... }

Contents of chart.js:


exports.getChart = function getChart(req, res, next) { ... }

Note a few things: the require() call for restify does not need a path, because I installed the restify NPM package globally (the -g option).  The require() calls are only needed in the main file, app.js.  The restify routing functions (eg server.get()) reference callback functions from the other source files by prefixing them with chart or jefile.  And the source code for those callbacks is written as exports (eg exports.getJefile = …).

Note how I managed, syntactically, my REST GET API methods for returning plain or JSON responses.  ../chart/YY returns the contents of the chart file for year YY; ../chart.json/YY returns the same file, but in JSON format.  Visually, I would rather see the contents of the chart file in native format, but for web page development I want all responses in JSON format.  The server.get() callback for both of these cases is getChart().  The way getChart() knows whether to return JSON or native format is by looking at the req.route.name parameter; that tells it which route was called.
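A sketch of how that test might look inside getChart(), with the caveat that restify derives route names from the paths automatically, so matching on 'json' in the name is my assumption about the naming format:

getChart = function getChart(req, res, next) {
    // the chart.json route's auto-generated name contains 'json'
    var wantJson = /json/.test(req.route.name);
    // ... read the chart file for req.params.YY into chartFile ...
    if (wantJson) {
        res.send({chart: chartFile});   // restify serializes objects to JSON
    } else {
        res.contentType = 'text/plain';
        res.send(chartFile);
    }
    return next();
}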

The way source files are modularized here is a pattern written about in the nodejs documentation, but it turns out there are many more advanced ways of managing modules.  This one worked great for me.  It also made it easier to debug and test each resource without accidentally wrecking the others.

Reference: Node API Docs: Modules 

Run Node and Apache both on Port 80

As I get my REST web service using Node working, I need to integrate it into the web service infrastructure already installed on my server.  Port 80 today runs Apache, serving blogs, weather stations, and a couple of web applications — all intertwined with Apache, and I cannot just start over.

So I found a way to get Apache to work with node; I keep node on port 8000 and Apache on port 80, and configure an Apache "ProxyPass" to route URLs to the node-based REST service.

In /etc/httpd/conf/httpd.conf:

<VirtualHost 64.53.181.XXX 192.168.100.XXX>
    ServerName mydomain.com
    ...
    ProxyPass /node http://localhost:8000/
    ...
</VirtualHost>
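One refinement worth noting (my addition, not part of my original config): mod_proxy setups usually pair ProxyPass with a matching ProxyPassReverse, so that redirect headers issued by the node service get rewritten back to the /node path:

    ProxyPass        /node http://localhost:8000/
    ProxyPassReverse /node http://localhost:8000/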

Any URI with mydomain.com/node in it gets proxied to my node service.  This is good, but part of the reason for moving to node is that it is supposed to give much improved performance — not that Apache is slow, but it has years of historic features built into it that take longer to execute.

Well someday, I’ll run only node. But for now, for me, node must coexist and this method works for me.

Conclusions

Working with JavaScript on the server side was nice.  For me, the debugging skills that I use for web page development carried over nicely to node development.  Nodejs's asynchronous design pattern was new to me, but once I caught on, it wasn't bad.  The way nodejs standardizes error handling is much better than the ad hoc approaches I have been using with PHP.

I found that I missed some of the PHP built-in functions.  For example, to access files on the server, nodejs has a very robust File System module.  It behaves differently from PHP, but I was able to figure it out — they are all based on the same underlying C libraries.  I resisted the temptation to use one of the frameworks that help bridge PHP developers into nodejs; they aren't bad, but they worked against my objective to learn.

Some next steps: I would like to do some performance comparisons.  Superficially, my REST API rewritten in node is faster than the existing PHP implementation; I'd like to quantify that.  Also, I have written server-side code that I ought to be able to reuse inside the web client applications; that would be a nice thing to clean up.  And I'd like to get my node application to do logging at least equal to the logging I get from Apache — restify has some nice built-in features for that.

There’s a big library of NPM modules out there. I’d like to learn more about them. This time around, all I really needed was restify. Maybe next time, I’ll try to use more of the features inside of it, or branch out and learn express. Anyway, thanks to the greater nodejs community for helping me get started.



Nob Hill Walk Cable Car Loop — San Francisco

Background on my Nob Hill walk: I was in San Francisco on business.  My hotel and meetings were on Nob Hill.  On my first day of meetings, I did an early-morning walk and a late-evening cable car excursion, looping the city.  A beautiful day.

Nob Hill Walk – Down California, Cable Car Back


I was in San Francisco for business meetings.  Early in the morning on the second day of my visit, I decided to go for a walk from my hotel on Nob Hill, heading east down California Street.  My walk ended at Market Street; from there, I caught the California line cable car to take me back.

For this nice little walk, I walked past buildings mostly in the Financial District:  Transamerica Pyramid, 650 California Street / Hartford Building, Old St. Mary’s Church, Omni San Francisco Hotel, Alvinza Hayward Building, Union Bank / Bank of California, and the terraced gardens of the 101 California Building.

The cable car terminus at California and Market/Drumm Streets was all torn up.  At first, I thought I couldn't board there because of the construction, but after a while several people started queuing up, so I knew this was still a good pick-up point.  The workers there were making sure that people were queuing for the cable car in a safe spot.

Heading back, I saw the Sing Fat Building, Mark Hopkins Hotel, Ritz-Carlton, Fairmont Hotel, and the Pacific-Union Club — Nob Hill is a nice place to visit!

Cable Car Ride: Union Square on Powell/Mason line to Embarcadero Streetcar South to California Line back to Nob Hill


After a full day of meetings and dinner, I took a for-fun ride on the cable cars.

I started at Union Square, hopping on the Powell/Mason cable car on Market Street, heading north all the way up to Fisherman's Wharf.  I briefly walked around and then caught the Embarcadero streetcar and rode south along the F-Market & Wharves line to the Ferry Building at Market Street.

I walked around the area, took some pictures of the Bay Bridge, had dinner nearby then walked over to the California line cable car terminus.  I rode up Nob Hill, back to my hotel.

This is a fun loop; it is something I have always wanted to do.  The only really new segment for me was the streetcar.  I rode in one of the decades-old vintage streetcars.  It was perfectly restored and had a very comfortable ride.  I shared it with about 15 other people.  I have seen the streetcars before; they look really cool, but they aren't the same as the cable car… much more like a bus.

On the loop, I took pictures of some noteworthy sites: "Hearts in San Francisco" and the Dewey Monument in Union Square Park, the Westin St. Francis, Sam's Cable Car Lounge, the Blue Mermaid bar at Fisherman's Wharf, the Ferry Building clock tower, the San Francisco–Oakland Bay Bridge, the Gandhi statue at the Golden Gate Ferry Terminal, the Southern Pacific Building, Vaillancourt Fountain, and then the Pacific-Union Club, Grace Cathedral, and the Mark Hopkins hotel on Nob Hill.

I bought a one-day pass for the cable cars.  It came in handy for this trip, which otherwise would have cost three separate fares.  I did this at night, a little bit late.  That was mostly OK, because during the day it is a lot more crowded.
EveryTrail – Find the best Walking Tours in San Francisco, California