Adding a new objective to Calypso Forward Ladder

I’ve spent the last six weeks configuring the Forward Ladder in Calypso 11 in order to get a real-time view into FX risk from Opics.  While I actually did some work on the Forward Ladder analysis back when I was at Calypso, they did a complete rewrite for version 11 and I must say that, while I did encounter a few issues, it is well thought out and quite extensible.  As such, I thought I’d share how easy it is to plug a new objective into the forward ladder.

The request was to display the discount factor (df, for short), which is unfortunately not available in the new ForwardLadder analysis.   While it was a column in one of the earlier incarnations of the analysis (i.e., FwdLadder), it seems to have disappeared in this latest version.   We could, of course, have asked for a P6 enhancement request, but that probably means we’d have had to roll this out next year as opposed to this year, so that was a no-go.   Digging into the code, I came across the Forward Ladder Registry, a class that lets you plug in and/or modify the objectives and flow generation extremely easily.

The ForwardLadderRegistry class actually reads the configuration from an XML file included in the resources.jar called com/calypso/tk/risk/ForwardLadderGeneratorMappings.xml.  Here’s a snippet from that file:

<registry>
    <global>
        <objectiveGenerators>
            <generator>
                <objectiveClass>com.calypso.tk.risk.forwardladder.objective.ForwardLadderObjectiveCore</objectiveClass>
                <generatorClass>com.calypso.tk.risk.forwardladder.generator.DefaultCoreGenerator</generatorClass>
            </generator>
            <generator>
                <objectiveClass>com.calypso.tk.risk.forwardladder.objective.ForwardLadderObjectiveCash</objectiveClass>
                <generatorClass>com.calypso.tk.risk.forwardladder.generator.DefaultCashFlowAmountCashGenerator</generatorClass>
            </generator>
            ...
        </objectiveGenerators>
    </global>
    ...
</registry>

I decided to create a new objective called MarketData in order to bring in the df, as requested by the users.   Thus, I added the following lines to the XML file:

            ...
            <generator>
                <objectiveClass>com.myco.tk.risk.forwardladder.objective.ForwardLadderObjectiveMarketData</objectiveClass>
                <generatorClass>com.myco.tk.risk.forwardladder.generator.DefaultMarketDataGenerator</generatorClass>
            </generator>
            ...

That’s easy enough, right?   The code for ForwardLadderObjectiveMarketData is trivial, and you can easily implement it based on one of the other objectives supplied by Calypso.   The meat and bones, however, resides in the DefaultMarketDataGenerator, which actually implements the ForwardLadderObjectiveGeneratorI interface.  (Incidentally, I’m not quite sure why Calypso forces you to implement two interfaces here.   I think one interface could easily have described the contract for both the objective metadata and the actual implementation!  Go figure…)

Here’s what the code looks like to display the discount factor:

public class DefaultMarketDataGenerator implements ForwardLadderObjectiveGeneratorI {
...
    public void populateObjectiveData(ForwardLadderObjectiveI objective,
                                      FlowGenerationContextI context,
                                      List<ForwardLadderFlowItem> flows) {
        Trade trade = context.getTrade();
        PricingEnvBasedFlowGenerationContext peContext = (PricingEnvBasedFlowGenerationContext) context;
        for (ForwardLadderFlowItem flow : flows) {
            for (ColumnMetaData metaData : objective.getObjectiveColumnMetaData(context.getParams())) {
                String dataName = metaData.getName();
                if (ForwardLadderObjectiveMarketData.CURRENCY_DF.equals(dataName)) {
                    CashFlow cf = flow.getCashFlow();
                    String ccy = cf.getCurrency();
                    CurveZero curve = peContext.getPricingEnv().getPricerConfig().getDiscountZeroCurve(ccy);
                    double df = 1.;
                    try {
                        df = curve.getDF(trade.getSettleDate(), QuoteSet.MID);
                        Amount dfDV = new Amount(df, ForwardLadderAnalysisUtil.DEFAULT_CONVERSION_RATE_ROUNDING);
                        ForwardLadderObjectiveData data = new ForwardLadderObjectiveData(metaData, dfDV);
                        flow.setObjectiveData(dataName, data);
                    } catch (Throwable t) {
                        Log.error(LOG_CATEGORY, t);
                    }
                }
            }
        }
    }
}

That just about does it, believe it or not!   The only catch is that you’ll have to tweak one more resource bundle file, or the ParamViewer will choke when trying to add the new objective.   Look for a file called com/calypso/bundles/apps/forwardladder/ForwardLadderObjectives.properties in the resources.jar and add the following properties at the end of that file:

objective.displayname.MarketData=Market Data
objective.help.MarketData=

As long as your classes and modified resource files come before Calypso’s in your classpath, you’ll be able to add this new objective to your Forward Ladder.
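Since resource lookup follows classpath order, a quick standalone check (class name WhichResource is my own, for illustration; the resource path is the one from resources.jar) will tell you whether your override or Calypso’s original copy wins:

```java
// ClassLoader.getResource returns the FIRST match in classpath order,
// so printing the URL shows which jar's copy of the bundle is picked up.
public class WhichResource {
    public static void main(String[] args) {
        String name = "com/calypso/bundles/apps/forwardladder/ForwardLadderObjectives.properties";
        java.net.URL url = WhichResource.class.getClassLoader().getResource(name);
        // With resources.jar absent this prints the fallback message;
        // in a real deployment it prints the winning jar's URL.
        System.out.println(url == null ? "not found on classpath" : url.toString());
    }
}
```

If your modified copy doesn’t show up first, reorder the classpath before chasing ghosts in the registry configuration.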

Covering a Short != Selling a Long

I spent most of the day yesterday listening to the Goldman Sachs testimony.  Since I missed a few hours, I went back earlier today to the parts I’d missed, via the C-SPAN archives.   I found it quite interesting how the Goldman Sachs traders spun their tales to deceive the Subcommittee.   In particular, they often mentioned that their goal throughout 2006 and 2007 was to reduce risk.  Yeah, fair enough.  I’ve got to imagine that with all the volatility in the market at the time, the middle and back office would have pushed pretty hard to reduce risk.

One interesting thing I thought I’d dig into deeper is the difference between selling a long position and covering a short position.  The witnesses often repeated the mantra that not only did they reduce their risk by selling their long positions, but they also often reduced their risk by covering their short positions.

Well, yes.   But not all risk is created equal, now, is it?

Between 2006 and 2008, covering your short positions in the market basically meant you were “locking in” your profits, because the mortgage-backed security market was in freefall.   By locking in your profits, you may indeed be reducing risk: the risk that your counterparty (cough… Bear Stearns… cough) will not be around to pay you tomorrow.   Either way, when you cover your shorts, you realize some amount of profit and reduce the company’s overall counterparty risk.

Selling your long positions between 2006 and 2008, however, was a whole different ballgame.   In fact, I’d venture to guess that Goldman Sachs did very little of that, because by the time they wanted to sell, there were probably no more buyers and therefore no market liquidity.   They had no easy way to sell their longs, since buyers might have been willing to pay only pennies on the dollar.   Instead, Goldman Sachs most probably offset these losses by taking opposite short positions.   If they indeed made such spectacular profits in 2007, they must have done the Big Short on a scale several orders of magnitude beyond their long positions.   In any case, your aim in selling your longs is to minimize your market risk.

It’s interesting that the Goldman Sachs witnesses would lump market risk and counterparty risk together and just call it risk, in order to say their aim was to “minimize” risk and not go directional.   Nice rhetoric, guys.

ABX: A blast from the past

I’m listening to the Goldman Sachs testimony, and as they float around words like CDOs and ABX, I thought I’d check what the current ABX prices are, per Markit.   If you recall, I actually wrote the code for CDS on ABX back when I was at Countrywide in 2006 and 2007.

Wow.

Factors Effective: 2010-04-26
27-Apr-10 Overview
Index Series Version Coupon RED ID Price High Low Factor
ABX.HE.PENAAA.07-2 7 2 76 0A08AWAD1 47.31 70.00 24.56 0.995918674
ABX.HE.AAA.07-2 7 2 76 0A08AHAD4 44.58 99.33 23.10 0.995918674
ABX.HE.AA.07-2 7 2 192 0A08AGAD6 5.32 97.00 3.75 0.842675865
ABX.HE.A.07-2 7 2 369 0A08AFAD8 4.63 81.94 2.97 0.383809843
ABX.HE.BBB.07-2 7 2 500 0A08AIAD2 3.58 56.61 2.89 0.15
ABX.HE.BBB-.07-2 7 2 500 0A08AOAD9 3.58 50.33 2.88 0.15
ABX.HE.PENAAA.07-1 7 1 9 0A08AWAC3 54.35 80.27 28.36 0.998941369
ABX.HE.AAA.07-1 7 1 9 0A08AHAC6 44.85 100.09 23.25 1
ABX.HE.AA.07-1 7 1 15 0A08AGAC8 5.13 100.09 2.86 0.827149338
ABX.HE.A.07-1 7 1 64 0A08AFAC0 3.77 100.01 2.33 0.432376793
ABX.HE.BBB.07-1 7 1 224 0A08AIAC4 3.68 98.35 2.19 0.15
ABX.HE.BBB-.07-1 7 1 389 0A08AOAC1 3.66 97.47 2.19 0.117129675
ABX.HE.PENAAA.06-2 6 2 11 0A08AWAB5 77.88 93.88 53.04 0.715876842
ABX.HE.AAA.06-2 6 2 11 0A08AHAB8 57.29 100.12 28.72 0.984457918
ABX.HE.AA.06-2 6 2 17 0A08AGAB0 14.83 100.12 6.94 0.92237709
ABX.HE.A.06-2 6 2 44 0A08AFAB2 5.20 100.12 3.42 0.431335167
ABX.HE.BBB.06-2 6 2 133 0A08AIAB6 6.78 100.59 2.29 0.111413346
ABX.HE.BBB-.06-2 6 2 242 0A08AOAB3 6.02 100.94 2.34 0.1
ABX.HE.PENAAA.06-1 6 1 18 0A08AWAA7 88.24 98.50 82.18 0.181987649
ABX.HE.AAA.06-1 6 1 18 0A08AHAA1 88.94 100.38 59.75 0.828696827
ABX.HE.AA.06-1 6 1 32 0A08AGAA9 45.83 100.73 15.90 0.972754188
ABX.HE.A.06-1 6 1 54 0A08AFAA7 14.79 100.51 7.50 0.826844803
ABX.HE.BBB.06-1 6 1 154 0A08AIAA4 4.98 101.20 3.95 0.424882857
ABX.HE.BBB-.06-1 6 1 267 0A08AOAA2 5.03 102.19 3.90 0.285366883

I realize this is somewhat cryptic, but take a look, say, at the price for ABX.HE.AAA.07-2. This was the AAA tranche of an ABS index made up of home equity (HE) loans, issued sometime around July of 2007. Do you see the price? 44.58. The price started at 100.0 at issue, so for this particular senior investment-grade tranche, you’d have lost more than 50% of your money if you’d invested back in 2007.

Wow.

If you decided to take a little risk back then, you might have bought the Mezzanine tranche, as the good ol’ boys at Goldman Sachs call it.    If you’d bought $100 worth, guess how much you’d have right about now?   $3.58.

If you think we’ve come even close to hitting housing bottom, I’ve got a bridge to sell you!

Adding custom Calypso packages by adding a new jar file

After my very interesting and enlightening attendance at The Server Side Java Symposium last month, I’ve decided to start exploring new technologies, and what better way to do that than by creating some open-source tools and libraries?  Since I’ve been working almost exclusively with the Calypso API for the last decade, it’s somewhat logical that I would start there.  I decided that my first project would be to integrate Calypso with GridGain as a Dispatcher to distribute Risk Analysis execution on the grid.   Calypso has its own Dispatcher implementation, but they themselves acknowledge it’s not ready for prime time.   They do sell an adapter to plug into DataSynapse as well, but I figured an open-source alternative would be a worthwhile tool for Calypso implementors.  Besides, once I get it all up and running, I want to play with Scala, as it seems to integrate very easily with GridGain.

So last week I launched Eclipse and created a new project.   As I began creating a new package, com.steepi, I paused.   In order to plug my package into Calypso, I would need to implement CustomGetPackages in the calypsox.tk.util package.  But if I do that, how is a Calypso implementor going to use it?   After all, they most certainly already have their own instance of CustomGetPackages!   Now granted, they certainly could make the modification to their class to add my packages.   Still.  What if I truly wanted to provide a library that would attach my packages simply by adding a jar to the CLASSPATH?   This problem merited further investigation…

After a few hours of research, I found a solution in an undocumented feature of Calypso.   During startup, the AppStarter class will at some point try to instantiate calypsox.apps.main.UserStartup.   If an instance of this class is found by InstantiateUtil, its start() method will be called via reflection.   Just the hook I needed!   I could place my custom code there to attach my custom packages.

Wait, though…

How would I go about doing so?   Calypso’s API does not provide a method to add packages to InstantiateUtil.   It’s all done within a static block when the class is loaded.   Thankfully, I’ve encountered limitations with the Calypso API before and the way to circumvent this problem is to use Java reflection to render methods and fields accessible.   Here, then, is the code that does exactly what I wanted!  If you compile the following code and add it to your CLASSPATH, you’ll be able to attach custom packages to existing Calypso implementation projects that have their own CustomGetPackages implementation. It’s a nifty way to provide a third-party library on top of Calypso, don’t you think?

package calypsox.apps.main;

import java.lang.reflect.Field;
import java.util.List;

import com.calypso.tk.core.Log;
import com.calypso.tk.util.InstantiateUtil;

public class UserStartup {
    public void start() {
        // In order to attach our packages, we operate on the
        // InstantiateUtil class using reflection so as to
        // reach through its encapsulation.
        Class clazz = InstantiateUtil.class;
        Field field;
        List packages = null;
        try {
            // Retrieve the _packages field in InstantiateUtil
            field = clazz.getDeclaredField("_packages");
            // Since this is a private field, we need to set it
            // to accessible so we can access it
            field.setAccessible(true);
            // Get the value of the field via reflection.
            // This is the actual List object
            packages = (List) field.get(clazz);
            // Add Steepi packages so they are available when
            // instantiating through reflection
            packages.add(0, "com.steepi");
        }
        catch (Throwable t) {
            Log.error("Error", "Unable to locate InstantiateUtil._packages via reflection.", t);
            return;
        }
        // TODO: Apply same logic to update _invertPackages field as well
    }
}

As noted inline, you’ll want to do the same to update the inverted packages field. I’ve omitted the code for brevity.
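The reflection trick itself is plain Java and easy to try without any Calypso jars.  Here’s a self-contained sketch against a stand-in class (Registry, its packages() accessor, and the PackageInjector class name are all made up for illustration; only the _packages field name and com.steepi come from the code above):

```java
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

public class PackageInjector {
    // Stand-in for Calypso's InstantiateUtil: a private static
    // package list populated in a static block.
    static class Registry {
        private static final List<String> _packages = new ArrayList<>();
        static {
            _packages.add("com.calypso");
            _packages.add("calypsox");
        }
        static List<String> packages() { return _packages; }
    }

    @SuppressWarnings("unchecked")
    public static void main(String[] args) throws Exception {
        Field field = Registry.class.getDeclaredField("_packages");
        field.setAccessible(true);                   // reach through encapsulation
        List<String> packages = (List<String>) field.get(null); // static field: instance arg ignored
        packages.add(0, "com.steepi");               // index 0 = highest lookup priority
        System.out.println(Registry.packages());     // prints [com.steepi, com.calypso, calypsox]
    }
}
```

Note that final only pins the field’s reference, not the list’s contents, which is why mutating the retrieved List works here just as it does against InstantiateUtil.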

This solution should work with any version of Calypso prior to 11.1.   Who knows whether or not they will eventually remove this logic since it is, after all, undocumented.   Still, for the time being, it should be easier to deploy my packages in existing Calypso implementations without needing any code change.   Sweet!

TSSJS2010: JVM Languages for mission-critical applications

The slides for this presentation can be found here

JSR-223: Scripting & what it means to you.

You want to use scripting languages to implement code faster through continuous prototyping.

Scripting is built right into Java 6.   Spring itself has limited support, but it is available in Mule ESB, ServiceMix, and other Spring-based containers.

Take a look at the javax.script package
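The javax.script entry point is the ScriptEngineManager.  A minimal sketch (class name ScriptingDemo is mine; note that which engines ship with the JDK varies — Java 6 bundled Rhino, later JDKs Nashorn, and recent ones none at all):

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class ScriptingDemo {
    public static void main(String[] args) throws Exception {
        // Look up an engine by its short name.
        ScriptEngine js = new ScriptEngineManager().getEngineByName("javascript");
        if (js == null) {
            // Recent JDKs may ship without a JavaScript engine.
            System.out.println("no JavaScript engine available");
            return;
        }
        // eval() returns a Number for arithmetic; normalize before printing
        // since Rhino and Nashorn box the result differently.
        Object result = js.eval("3 + 4");
        System.out.println("3 + 4 = " + ((Number) result).intValue());
    }
}
```

The same lookup-and-eval pattern works for any JSR-223 engine on the classpath — Jython, JRuby, and so on.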

Reasons for Scripting

  • Prototyping
  • Better tools for problem domain

Jawk – Processing a lot of text
Jython – System Management tools
XSLT – XML Manipulation

If the only reason you’re using a scripting language is dynamic code deployment, though, you should instead look at OSGi.

There’s also a tool called JRebel that’s worth taking a look at.

The Java Language is Awesome as long as you don’t break its abstractions!

Mule is great for pulling services together across multiple protocols.

TSSJS2010: The Cloud Computing Continuum

Bob McWhirter gave by far the best keynote session I’ve seen yet.   His delivery is humorous, he’s got great stage presence, and the slides were very creative.  Furthermore, he was best able to give me an understanding of what the hell the cloud is anyway.

The cloud is the next logical step in application delivery.

First, we jammed some servers in some closets.
Then we did some colo.
After that, we leased some managed servers.

With the cloud, we lease what appears to be servers.

The Cloud Computing Stack

Software as a Service (SaaS) – Abstracts software
Platform as a Service (PaaS) – Abstracts Services
Infrastructure as a Service (IaaS) – Abstracts Hardware

Tenets of the Cloud

  1. Illusion of Infinite Resources
  2. On Demand Acquisition and Provisioning
  3. Someone Else’s Responsibility

Virtualization makes it affordable to make mistakes.  Repeatedly.

The network isn’t all that slow, compared to spinning disks.

Messaging (REST-MQ)
Scheduling
Security & Identity (OASIS)
Computation (Query and otherwise)
Transaction (REST-TX)
BPM
Telephony

It’s a stack of services, instead of a stack of software.

More info at http://cloudpress.org

TSSJS2010: GWT fu: Going Places with Google Web Toolkit

This session provides an introduction to GWT for designing web clients.

With GWT 2.0, the hosted mode browser plugin allows you to run in the various browsers directly.

It’s probably due to limitations from my hangover, but apparently we’re running Javascript on the client even though it looks like we’re writing Java.  Is GWT somehow converting my Java into Javascript?  I think Geary alluded to that fact, but I wasn’t quite sure.  The code for ClickHandler certainly looks like Java, though.   As a matter of fact, I later got confirmation that this is indeed the case.   With GWT, you write Java and GWT takes care of generating the Javascript!  So sweet!!

Code splitting is a useful pattern to use.  Because GWT applications are downloaded to the client as Javascript, they can get pretty big pretty fast.  Code splitting lets you split out your code and lazily download it as needed.

Layout panels provide an easy way to create layouts without too much fuss a la Swing.

Client bundles let you pull down a bundle of resources in one HTTP request.

WebAppCreator application is used to generate the web application:

webAppCreator com.clarity.SimpleApp

GWT now creates a bunch of stuff for us.

ant hosted

Import project into Eclipse.   Nice to know it integrates seamlessly with Eclipse, by the way.  😉

All GWT Applications are modules.  From the code, it looks like the server implements a Remote interface.   Also it would seem that GWT allows you to plug into Hibernate.  It isn’t just a Web framework, then?

Event Handlers

Event Handlers are very similar to Swing/AWT/SWT listeners.   They’re typically implemented in anonymous inner classes.

Swing came with adapter classes for event handling with no-ops.   GWT first came out with the same paradigm.   Not all GWT widgets are notified about all events.   You can sink events on a widget to be notified of events you typically wouldn’t receive.

History

GWT has a history mechanism as well.  It’s apparently very slick and it handles something like state, but I’m not quite understanding why it’s so slick.   Then again, I was at the Blackjack table until 2:30 AM drinking free Gin and Tonics, so my mental capacity is somewhat muted.

Go, Speed Tracer, Go!

Speed Tracer monitors any web application and gives all kinds of data: how many requests are being made, how long they take, etc.  In short, it’s a GWT-supplied performance analysis app, and it’s good for improving the performance of your web app.

Code Splitting

Code Splitting is achieved by surrounding the code you don’t need right away inside a GWT.runAsync() block.   That allows for lazy-loading of code that’s not needed right away.

Best Practices

  • Use History from outset
  • Design for UIBinder from the outset
  • Consider MVP and an event bus
  • Use Speed Tracer and code splitting for performance

GWT, in many respects, is like Swing.  It allows you to build your web page GUI using a very Swing-like API.  No Javascript, No Ajax.  Just pure 100% Java goodness!

TSSJS2010: Mission Critical Enterprise/Cloud Application Case Study

Not a good start.   The speaker, Eugene Ciurana, is nowhere to be found.   Apparently he’s on his way.  Hopefully his mission-critical apps are more responsive than he is.  😉   He finally arrives out of breath, as he was apparently working out at the gym.   After he catches his breath, we begin.

The presentation is about how to design, implement, and roll out cloud/enterprise hybrid applications.   There are a lot of different cloud architectures: PaaS, SaaS, and IaaS (Infrastructure as a Service).   Technologies include: ESB, clouds, mini-clouds, Java, App Engine, Chef/Puppet, etc.    Cloud technologies lower your cost.   The questions addressed:

* Which applications are best suited for cloud deployment?
* What are the advantages of PaaS or SaaS resources?
* What are the caveats of cross-platform and cross-language integration?
* What are high-performance alternatives to XML serialization for data exchange?  (e.g., JSON)

What is the Cloud Anyway?
* Platform as a Service (Amazon, Google, Rackspace, etc.)
* Software as a Service (Salesforce.com, Amazon)
* Infrastructure as a Service (IBM, HP)
* Pure-infrastructure (Data centers)

Cloud Services Features
* Quick deployment of prepackaged components
* Uses commodity, virtualized hardware and network resources
* “Pay as you consume”
* Horizontal scalability is achieved by adding or removing resources as needed
* May host full applications or only services
* They could replace the data centre
* Basic administration moves to the application owner
* For the bean counters… it’s an operational expense!
* Assuming sensible SLAs, the ROI is better than for co-located or company-owned data centres (but won’t achieve 4 9s)

Uptime != Availability.   Many factors affect availability: network, storage, process

Hybrid Cloud Architecture

* Many mission-critical systems will live behind the corporate firewall
* The cloud is used for high-load applications and services
* The cloud applications work independently of the data center applications, and vice versa.

Case Study: Video game company with a hybrid cloud application

Objectives
* Stable architecture
* Low cost
* Build scalability whenever possible
* Optimal data transfer rate for all properties

Initially, there were some 8 machines that were running everything: QA, PROD, debugging, etc.   The performance was atrocious!

Phase 1: Scalability

* Introduces a CDN for asset delivery (media, images)
– Amazon S3 for asset delivery
– Reduces load on company servers and bandwidth costs
* Introduces database replication for production environments
* Establishes a continuous integration environment
– Improved build/release process
* Fail-over with traditional database replication techniques

Phase 2: Cloud Deployment
* Web applications move to a uniform technology (.Net)
* The database and stored procedures normalized and optimized
* Applications use common resources via Mule ESB and services
– No more direct calls from apps to database
– Business logic is implemented as stateless POJOs
* Software stack was best-of-breed
* Web and other RPC services must coexist
– Different partners use different protocols
* Bandwidth can be expen$ive!
* Data exchange protocols
– clients: custom, XML, JSON
– images: HTTPD, S3
– Cloud-to-HQ: custom, XML/SOAP, protocol buffers
– HQ data center: XML/SOAP, protocol buffers
* Replication strategy: data centre
– The cloud isn’t trustworthy yet
* Deployment involves using an Amazon Machine Image (AMI)
* AMIs need a post-configuration step in a load-balanced environment
* Elastic Load Balancer and Elastic IP limitations

And so it ends.   This session was clearly pitched at C-levels (CIOs in particular).  There was very little meat and substance for Java geeks to get their hands on.   It’s a shame.   Clearly Eugene has a lot of knowledge, but this was just way too high in the clouds (pun intended).

TSSJS2010: Cloud Computing with Scala & GridGain

I’m here at the beautiful Caesar’s Palace waiting to hear more about Scala.   James Gosling’s keynote speech this morning was a bit of a letdown.   I was hoping to hear a visionary speech from the father of Java, perhaps not with Obama’s oratory skills, but nevertheless captivating.   What I got instead was a 60-minute infomercial on Sun’s current product offerings.   Yes, Java EE 6 and GlassFish 3.0 are totally slick, and I do like the concept of using annotations for event handling.   That said, I was hoping he would talk more about the future of Java, especially with the Oracle merger and everything.   Anyway, no go.   So now I’m here waiting to hear what the hell Scala is and how I can use it.   From what I’ve read, they’re using it at Twitter, so it’s gotta be somewhat scalable, right?  😉

The presentation is broken down into 20% talking and 80% live coding.   Nikita Ivanov, the presenter, has a very good stage presence, and I enjoy his direct approach.

We start with the talking part by defining a few terms for us neophytes (myself included!).  What is Grid/Cloud Computing?   A grid is defined as two or more computers working in parallel.   Grid computing comprises compute grids plus data grids.    The cloud, meanwhile, is data center virtualization.  Clouds are the new way to deploy and run grid applications.

Why Grid/Cloud Computing?

It solves problems that are often unsolvable otherwise.   Google has ~1,000,000 nodes in its grid.   Put another way, it’s about money.   Amazon says that 100 ms of latency costs 1% of sales.  Google says that 500 ms of latency drops traffic 20%.   In the financial sector, one millisecond costs $4M in currency markets.

GridGain at a Glance

The project was started in 2005 as Java-based Cloud Development Platform:
* Compute Grid (aka MapReduce)
* Data Grid (aka Distributed Cache)
* Auto-scaling on the cloud

Scala at a Glance

* Started in 2004 by Martin Odersky at EPFL  (author of javac and Java Generics)
* Scala is Post-Functional Language (combines functional and OO approach)
* Fully interoperable with Java (runs on the JVM, calls Java and can be called from Java)
* Statically typed (Unique and powerful type inference)

Apparently, there’s more use of Scala in Europe than there is here in the US.   A large national French bank already has a dozen Scala projects rolling out.   In the audience here today, only one hand went up when we were asked who is using Scala.

Why Scala?
* Performance largely equal to Java
* Statically typed
* Inter-compatible with Java
* Scalable language

Scalar – Scala-based cloud computing DSL + GridGain 3.0
* Uses Scala
– Functional-imperative
– Runs on JVM
– Reuses 100% Java libraries
* Running on top of GridGain 3.0 runtime

DSL – Domain Specific Language
* Provide simple cloud computing model
* Draws on functional features of Scala
* Dramatically simplifies cloud computing applications

The demo is to build a Scala grid application in 10 minutes!

import org.gridgain.grid.gridify.Gridify
import org.gridgain.grid.GridFactory

object ScalaDemo {
    def main(args: Array[String]) {
        GridFactory.start()

        try {
            say("hello Scala Las Vegas")
        }
        finally {
            GridFactory.stop(true)
        }
    }

    @Gridify
    def say(msg: String) { println(msg) }
}

This code automatically deployed the object to the Grid and executed it on one of the nodes!   That’s mindblowing and so cool!   Clearly there’s tons of complexity behind the scenes.   The class definition must be serialized, the grid must be located, some scheduler must identify the node on which the object is to run.   I’m totally blown away by how painless it is to deploy on a grid!

Ivanov proceeded to write a task in about 10 lines of code that split the string into words and dispatched the job of printing those words onto the various nodes.

The same demo was then given using Scalar, the GridGain DSL.    That got a little more cryptic, in my humble opinion.   Still, it did require fewer lines of code to get the job done.   The tradeoff, though, seems to be readability.  You’re slowly creeping into the world of Python, which I’ve never enjoyed because only the original developer can ever make bug fixes.

Still, Scala clearly is pretty slick and it looks like the integration with GridGain in order to parallelize tasks is very easy.   I’ll definitely need to investigate how I can leverage this for risk analysis.

Organize the World’s Services

Etsy is “your place to buy and sell all things handmade.”  It’s a wildly successful startup that basically open sources the exchange of goods.   I must admit I haven’t spent much time there, but I think it’s a good concept.   Really, though, what’s been tickling my fancy is Umair Haque’s Manifesto for the Next Industrial Revolution.  Umair is basically calling for a revolution in 21st century Capitalism.  I won’t even begin to paraphrase his argument because it wouldn’t be fair to his brilliant writing.  Suffice it to say that if you’re not following Umair, you’re seriously missing out on a brilliant new voice of our generation.

My take on this whole thing is that we’ve moved from a society in which bartering and exchange, due to physical and technological limitations, forced us to socialize and interact with others, to a society where there’s very little need for interaction with others anymore.   The concept of currency greatly facilitated things, surely, by defining a system of value.   A dollar is worth a certain amount of food, or can be used to buy a certain amount of time from someone else.   Most important, it’s fungible and allows for that value to easily be transported from one place to another.   This has removed a lot of friction in the economy, made capital more liquid, and facilitated technical innovation.   Unfortunately, it’s also put way too much value in currency itself.  It has now become more important to acquire currency than to cultivate relationships or share time with others.  Most of the interactions we still do have with others are adversarial in nature:  transactions at the checkout counter, customer service for product repair, coworker politics to try and get the corner office.   Is it any wonder our society has become so aggressive with one another?

While I’m certainly not the most social individual, I was blown away when living in L.A. by the fact that I did not know my neighbors.   I knew they were there and I recognized their cars.   Garage doors would open in the morning and at night and cars would enter and exit.   Whatever happened to the people?   I told my wife a few days ago that this longing for interaction might explain why people cling to smoking, especially here in California.  It affords them the ability to go out and share a smoke and a conversation.   Has the price to pay for a nice conversation with a fellow human being become lung cancer?

The Great Recession may be inadvertently bringing about a great awakening.  One out of ten Americans is now unemployed.   One out of four Spaniards is, too.   This brings about, of course, a lot of suffering and anguish.  It also, however, brings about a lot of people with free time.  Instead of focusing on the negatives of this forced time off, why not try to facilitate its use?

So I’ve been thinking about Umair’s manifesto in this context.   While there’s no doubt all of these people are eager to get back to work, why not create a platform to organize all the world’s services?   Perhaps you’re an out-of-work graphic designer who’d love to trade a couple of business card designs for an evening of babysitting and a nice romantic meal with your spouse?   There are all kinds of services out there that, beyond the acquired expertise, cost no more than time and, perhaps, basic supplies:   babysitting, massages & facials, house cleaning, cooking, gardening, moving, handyman jobs…  The list goes on.

Why not create an open-source service exchange with a new unit of value: the hour?   If you’re not working, why not let your hour of graphic design equal one hour of babysitting?   In an hour, an unemployed chef could whip up a wonderful culinary meal.   Why not exchange it against an hour of electrical work wiring up a new lamp in the living room?    In short, why not provide the Etsy of services, allowing people to accumulate time units that they can redeem against other services?   Certainly with a tight budget, there’s no way I’m going to go get a massage at a spa for $150.   I still would love to get a massage, however, and would gladly trade for an hour of my expertise.

Why not?  I understand the reluctance… Some will say that it is a much bigger investment to become a Java expert than an auto mechanic, but if the programmer and mechanic are both unemployed, what they both have to offer that doesn’t cost anything is time.   Hence, this exchange would basically allow for people to buy and sell their time.   Any additional supplies or goods necessary could be supplied by either the buyer or the seller.   There you go:  you’ve got a platform to organize the world’s services.

What do you think about this?   Am I missing a huge piece that makes this idea completely unrealistic?