
Posts

Showing posts from 2012

Go over the Fiscal Cliff: Bond Style

There seems to be an obvious approach to the impending fiscal cliff: fall off it, and then get rescued - Bond style. Why?  Well, the short-sighted Republicans who have signed on to the "no new taxes" mandate will never back down - that would involve admitting they were wrong.  So there will be no Grand Bargain. The Fiscal Cliff, however, has new taxes built in, so they would not have to vote for it....they could just let it happen. If both parties got together today and started to negotiate a "plan to put in place as soon as the fiscal cliff kicks in", they could have their cake and eat it too.  The Republicans could actually negotiate a post-cliff tax cut, the Democrats could allow some cuts to Government spending while rescuing a few key areas....everyone is a winner. Chances of this happening: about 0%

Software eats the Government

Andreessen Horowitz are well known for the theory that "software will eat the world."  I am a big believer in this.  I was recently asked to speak at the Banff Forum, a Canadian think-tank, and talked about this, combined with how "open development" alongside software would challenge everything we do over the next twenty-five to fifty years.  It is fairly easy to point to software eating the world with books (Amazon), travel (Expedia and others), trading, etc.  It was a little harder for me to find great examples of how software will eat government, beyond the standard open.gov data initiatives. Thus, I was quite intrigued by Clay Shirky's TED talk about how git, the software version control system, could apply to law and the democratic process.  In Clay's words, it is a new form of arguing that is compatible with the democratic process. (Nice to see Mozilla in the top seven links on the site :-)   Git's distributed content management is

Entropy

I am reading two books at the same time: The Information by James Gleick and Cycles of Time by Roger Penrose. By coincidence, both have Entropy at their core. Roger Penrose develops Conformal Cyclic Cosmology (CCC), primarily from an analysis of the exceptionally low entropy at the time of the Big Bang, and his argument that the theory of "inflation" in the early universe does not sync up with Entropy calculations. Gleick, on the other hand, studies the rise of information, and while the whole book is interesting, I found the sections on Claude Shannon and the development of Information Theory to be the best part.  While I have used the outcome of Shannon's theory in some of my digital signal processing projects (years ago!), I did not know that his original insight was around Information Entropy.  That's pretty cool. Why am I reading two books at once?  Well, the Penrose book is tough going!  The arguments are hard to follow, the writing is
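
Since Shannon's Information Entropy comes up above, here is a minimal sketch of the quantity Shannon defined - the average information per symbol, H = -sum(p * log2(p)). This is a standard illustration in Python, not something from either book:

```python
from collections import Counter
from math import log2

def shannon_entropy(message: str) -> float:
    """Average information per symbol, in bits: H = -sum(p * log2(p))."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# A repetitive message carries almost no information per character;
# a more varied one carries more.
print(shannon_entropy("aaaaaaaa"))   # -0.0 -> zero bits per symbol
print(shannon_entropy("abcdefgh"))   # 3.0 bits per symbol
```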

The Fourth R.

Reading, wRiting, aRithmetic, and algoRithms.  My wife and I were just brainstorming about this: how coding should be the next "basic" skill.  Of course, someone was ahead of us and posted this.  It is awesome to see Mozilla Hackasaurus referenced in this article.  It is a small world. In the early days of the printing press, scholars wrote the books; the press was simply used for production (see this article).  As time went on, "average" people became familiar with the medium, and used it for their own messages.  We are at just that point with the Web.  Software Engineers write the code, and the Web distributes it.  Software Engineers are the algoRithm scholars of today.  They won't be for long.  Soon algoRithms will be taught starting in elementary school, along with the other three R's.

Connectome as a Book

Your Connectome is a map of your brain.  Every neuron, every synapse. I am only a few pages into Connectome, but was intrigued by a sentence: "Human DNA....has three billion letters....would be a million pages long if printed as a book."  The companion question, "How many pages for the Connectome?" might be answered later in the book, but I thought I would take a shot at it here. Here is the punchline: your Connectome book is 6.7 million times longer than your DNA book. That human DNA is about a million pages is not too surprising, although it probably is not optimized. According to Quora there are between 1500 and 1800 letters per page.  I am going to use round numbers, namely 2000.  Then, the 3x10^9 DNA letters would actually be 1.5 million pages.  But this is very wasteful.  Even using just ASCII we can encode four DNA letters per character, so the book should really only be about 400K pages.  And, this book is much more interesting; instead of endless GA
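
As a quick check on the arithmetic above, here is a back-of-envelope sketch in Python using the same assumptions (2,000 letters per page, and packing four 2-bit DNA bases into each ASCII character):

```python
# Back-of-envelope page counts for the DNA "book" described above.
DNA_LETTERS = 3_000_000_000   # ~3 billion bases in human DNA
LETTERS_PER_PAGE = 2000       # rounded up from the Quora estimate of 1500-1800

plain_pages = DNA_LETTERS / LETTERS_PER_PAGE   # one base per printed character
packed_pages = plain_pages / 4                 # one ASCII character holds four 2-bit bases

print(f"One base per character:   {plain_pages:,.0f} pages")    # 1,500,000
print(f"Four bases per character: {packed_pages:,.0f} pages")   # 375,000 (~400K)
```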

facebook.com/realtimefinancials

[ This is a thought experiment arising from speculation on whether Facebook would give quarterly guidance once they are public; implementing this thought experiment is not realistic :-/ ]  I wrote a few weeks ago about "Fixing the Game", by Roger Martin, which details how the public markets have become a game of Expectations versus Reality.  He argues that, unless this changes, the capital markets are in trouble. A lot of the Expectations Game is enabled by public companies giving quarterly and yearly guidance, leading to "gambling" behavior by investors.  One of Martin's recommendations is to repeal safe harbor regulations, which would strongly discourage companies and executives from giving forward guidance. Another (extreme) approach would be to have companies post real time (unaudited) financial information. While very difficult for some types of business, doing so for an online advertising business is possible. Real time financials would not only make

Facebook IPO - the new normal?

It used to be that an IPO was:
1. The first major liquidity event for shareholders
2. A means to raise growth capital
3. The first time a company was "market priced", due to (1)

Because private market trading is now so prevalent, even small companies can achieve liquidity early in their lifecycle, with significant trading volume.  This also means that the companies will have fairly accurate market pricing.  If anything, they may be overpriced as the private trading system attracts earlier, higher risk investors. The secondary markets also provide a means for companies to attract early growth capital...with a lot less hassle than an IPO.  Companies will probably stay on secondary markets until they are more mature, meaning more of the upside value will have been realized before they go public. This is certainly the case with Facebook.  They are mature, they are profitable, and they raised a lot of their growth capital already.  The IPO (rumored to be raising $10B) is more ab

Microsoft: Pay me $250 instead

If this article is accurate, Microsoft is paying Nokia almost $250 for every Windows phone that Nokia ships.  The payback, ostensibly, is twofold:
1. Wide enough adoption that Microsoft becomes a player in mobile
2. People, through usage, will stick to Microsoft services, and become long term customers

I wonder if Microsoft could achieve both aims through a software-only play?  I imagine buying my new Android phone, and then installing "Windows Phone 8", the App, for which Microsoft will pay me $20/month for every month that I am an active user.  They can do that for 12 months for the same amount that they are paying to Nokia, so they have a full year to make me a believer in Microsoft solutions. Of course, Google may react and try to shut down, or limit, such a practice.....but operators might endorse it.  More Microsoft services, more data usage. The marketing tradeoff is straightforward: is it easier to get someone to download the Windows 8 App, or to purchase a Noki
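
As a rough sanity check on the numbers above (the ~$250 per phone is from the linked article; the $20/month payment is this post's hypothetical), a quick back-of-envelope comparison in Python:

```python
# Rough comparison of the two subsidy models discussed above.
nokia_payment_per_phone = 250   # reported payment per Windows phone Nokia ships (per the article)
app_payment_per_month = 20      # hypothetical payment to an active "Windows Phone 8" app user
months = 12

app_payment_per_user = app_payment_per_month * months   # 240, roughly the same total outlay
print(f"Per-phone subsidy:      ${nokia_payment_per_phone}, paid up front")
print(f"Per-user app payments:  ${app_payment_per_user} over {months} months, paid only while the user stays active")
```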

BrowserID in the Edison Quadrant?

I have been reading up on Donald Stokes's theories of innovation, which, for some reason, I had not seen before.   It is quite philosophical, but has some interesting points.  The main one is that "the path to innovative products" does not always start from pure research and evolve towards useful products.  Instead, research often moves between quadrants - both left and right as well as up and down.  Sometimes, for example, very applied research will highlight a fundamental technology gap, which then drives use-inspired basic research.  Beyond Bohr, Pasteur, and Edison, I was trying to map some other projects into the matrix.  DARPA, for example, is focused in the Pasteur Quadrant, while the CERN work is certainly Bohr-ish :-) BrowserID, which we have been developing at Mozilla, is a good example of Edison research.  The fundamental building blocks were available, but they had not been put together in a way that met our "consideration of use" (a sign-in system whic