
Using CloudFoundry with Node.js and MongoDB

This isn’t so much a question, just a bit of extra documentation for people. There aren’t many clear examples of how to make a simple connection from CloudFoundry to MongoDB when you’re using Node.js, so I thought I’d stick up this example using the simple-to-use mongoskin wrapper. The native MongoDB driver for Node is a bit awkward when you’re using authentication, so I skipped to the mongoskin layer, which is almost identical.

In your application directory, you need a file called app.js containing the following code:

var mongodb = require('mongoskin');
var env = JSON.parse(process.env.VCAP_SERVICES);
var mongo = env['mongodb-1.8'][0]['credentials'];
var mongourl = "mongodb://" + mongo.username + ":" + mongo.password + "@" + mongo.hostname + ":" + mongo.port + "/" + mongo.db + "?auto_reconnect=true";
console.log(mongourl);
var db = mongodb.db(mongourl); // This is the connection

var http = require('http'),
    port = Number(process.env.VCAP_APP_PORT || 3000),
    host = process.env.VCAP_APP_HOST || 'localhost';

http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  var currentdate = new Date();

  db.collection('test').insert({ daterecord: currentdate }, {}); // This is the insert

  res.end('Welcome to Cloud Foundry!\n' +
    host + ':' + port + '\n' +
    require('util').inspect(process.env, false, null) + '\n');
}).listen(port, host);
console.log('Server running at http://' + host + ':' + port + '/');
You also need a subdirectory called node_modules containing the mongoskin module. Using the latest version of npm (1.0.3 at the time of writing), run:
mkdir node_modules
npm install mongoskin
then run:
vmc push appname
When asked if you want to bind a service, say Yes, then pick MongoDB.
The username, password, and other connection details are picked up automatically in the app.js file.
All the script actually does is insert a new record containing the current date each time you visit the page, then dump out all the environment variables to the screen, so you can see where the database details are gathered from.
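For reference, the parsed VCAP_SERVICES variable contains JSON roughly along these lines (the values below are made up, and the exact set of fields can vary):

{
  "mongodb-1.8": [
    {
      "name": "mongodb-appname",
      "label": "mongodb-1.8",
      "credentials": {
        "hostname": "172.30.48.64",
        "port": 25001,
        "username": "4a264b10-example",
        "password": "9d8e2c4f-example",
        "db": "db"
      }
    }
  ]
}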
Hope this helps some people. If you have any problems with the code, please leave me a comment on here!

Creating your own Personal Twitter Archive

I’ve seen enough company failures and changes in terms of service to always like having a backup plan for when a service goes missing (plus, working on system migrations tends to make you think the worst of data).

As a pretty heavy user of Twitter, I’ve been thinking more about what happens if the service goes bad, or if they simply never fix their broken search function.

To help with both, I’ve written a fairly hacky pair of node.js scripts which collect any mentions of you, and any tweets you send, using the Twitter Search API, and uploaded them to GitHub.

The two files are pta_server.js and pta.js.

pta_server.js is meant to run in the background (using something like forever); it connects to Twitter Search, gathers any new tweets, and stores them in a MongoDB data store.
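In outline, the collection loop looks something like this. It’s a simplified sketch rather than the exact code from the repo: the search query, database location, and polling interval are all placeholders.

var http = require('http');
var mongodb = require('mongoskin');
var db = mongodb.db('localhost:27017/pta?auto_reconnect=true'); // placeholder connection string

var since_id = 0; // highest tweet id seen so far

function poll() {
  // Placeholder query: everything mentioning @yourname
  var path = '/search.json?q=' + encodeURIComponent('@yourname') + '&since_id=' + since_id;
  http.get({ host: 'search.twitter.com', path: path }, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
      JSON.parse(body).results.forEach(function (tweet) {
        db.collection('tweets').insert(tweet, {}); // store the tweet as-is
        if (tweet.id > since_id) { since_id = tweet.id; }
      });
    });
  });
}

setInterval(poll, 60000); // poll once a minute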

pta.js is a very basic web interface, which connects to the MongoDB data store and retrieves all collected tweets, ordered by date.
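The retrieval side is even simpler. Stripped of the web server plumbing, it boils down to something like this sketch, which assumes the tweets were stored exactly as returned by the search API (tweet ids increase over time, so sorting by id gives date order):

var mongodb = require('mongoskin');
var db = mongodb.db('localhost:27017/pta?auto_reconnect=true'); // placeholder connection string

// Fetch every stored tweet, newest first
db.collection('tweets').find().sort({ id: -1 }).toArray(function (err, tweets) {
  if (err) throw err;
  tweets.forEach(function (tweet) {
    console.log(tweet.created_at + '  ' + tweet.text);
  });
});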

Hopefully it’ll come in useful, and I plan on improving both the functionality and the documentation soon. If you want to give it a try, check it out and let me know what you think.

Public Cloud Computing – From “We can’t…” to “We can”

From the first day of public cloud computing, there have been people saying “We can’t use public cloud computing, because…”, followed by a range of reasons, all perfectly legitimate but generally based on company policies or long-held fears about shared resources, security, and support, rather than on technical limitations.

Over the past few years, Amazon and the other public cloud providers have been chipping away at these reasons for not using public cloud computing, with Amazon recently upgrading their “Virtual Private Cloud” offering – originally just a VPN connection to their servers – to include controllable, secure networking of their instances.

Now, Amazon have launched “Dedicated Instances”, an offering where you pay a flat rate of an extra $10 per hour per region when you launch any number of dedicated instances. By “dedicated instance”, Amazon mean an instance running on hardware that runs your instances and no one else’s. No more multi-tenancy resource fears on the server, and reduced worries about over-commitment of hardware resources, potential weaknesses in the Xen hypervisor, and so on.

You still get many of the benefits of the public cloud – no up-front costs, the massive scale of AWS leading to lower overheads, commodity services, and so on – you just pay a slightly higher per-hour price to remove one of the major hurdles to moving to public cloud computing.

I’m sure that dedicated EBS will be coming along soon, and perhaps dedicated S3 storage for people using more than something like 10TB of data – roughly the amount that would justify a dedicated shelf of storage, replicated to multiple locations.

While these recent moves won’t let everyone use the public cloud to reduce their computing costs and improve their flexibility, it’s a big step in moving people from “We can’t do this because” to “We can do this, now let’s get on with it”.

And that’s got to be good, hasn’t it?