Category / Software and Virtual Worlds

Knight check October 24, 2013 at 8:43 am

Amusing details from the SEC proceeding against HFT firm Knight:

To enable its customers’ participation in the Retail Liquidity Program (“RLP”) at the New York Stock Exchange, which was scheduled to commence on August 1, 2012, Knight made a number of changes to its systems and software code related to its order handling processes. These changes included developing and deploying new software code in [the firm's system] SMARS. SMARS is an automated, high speed, algorithmic router that sends orders into the market for execution. A core function of SMARS is to receive orders passed from other components of Knight’s trading platform (“parent” orders) and then, as needed based on the available liquidity, send one or more representative (or “child”) orders to external venues for execution.

Upon deployment, the new RLP code in SMARS was intended to replace unused code in the relevant portion of the order router. This unused code previously had been used for functionality called “Power Peg,” which Knight had discontinued using many years earlier. Despite the lack of use, the Power Peg functionality remained present and callable at the time of the RLP deployment. The new RLP code also repurposed a flag that was formerly used to activate the Power Peg code. Knight intended to delete the Power Peg code so that when this flag was set to “yes,” the new RLP functionality—rather than Power Peg—would be engaged.

This is like the nubile teens kissing in a horror movie: you just know it is going to end badly.

The baddie with the knife appears soon enough:

When Knight used the Power Peg code previously, as child orders were executed, a cumulative quantity function counted the number of shares of the parent order that had been executed. This feature instructed the code to stop routing child orders after the parent order had been filled completely… Beginning on July 27, 2012, Knight deployed the new RLP code in SMARS in stages by placing it on a limited number of servers in SMARS on successive days. During the deployment of the new code, however, one of Knight’s technicians did not copy the new code to one of the eight SMARS computer servers. Knight did not have a second technician review this deployment and no one at Knight realized that the Power Peg code had not been removed from the eighth server, nor the new RLP code added. Knight had no written procedures that required such a review.

On August 1, Knight received orders from broker-dealers whose customers were eligible to participate in the RLP. The seven servers that received the new code processed these orders correctly. However, orders sent with the repurposed flag to the eighth server triggered the defective Power Peg code still present on that server. As a result, this server began sending child orders to certain trading centers for execution.

And that, ladies and gentlemen, is one way to lose $460M in a few hours.
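
For the programmers in the audience, here is a deliberately simplified sketch of the failure mode the SEC describes. Every name and structure in it is invented; it is emphatically not Knight's code, just an illustration of why a repurposed flag, a dead-but-callable code path and a broken cumulative-fill check are a bad combination.

```python
# Toy illustration only: invented names, invented structure, not Knight's code.

def send_child_order(venue, symbol, qty):
    """Stub: pretend we routed a child order and it filled in full."""
    print(f"child order: sell {qty} {symbol} -> {venue}")
    return qty

def route_parent_order(symbol, parent_qty, rlp_flag, has_new_rlp_code):
    if rlp_flag and has_new_rlp_code:
        # The seven correctly deployed servers: the new RLP path tracks
        # cumulative fills and stops once the parent order is done.
        filled = 0
        while filled < parent_qty:
            filled += send_child_order("RLP venue", symbol, min(100, parent_qty - filled))
        return filled

    if rlp_flag and not has_new_rlp_code:
        # The eighth server: the same flag now lands in the old Power Peg
        # path, whose cumulative-quantity check is defective, so it never
        # learns that the parent order has already been filled.
        broken_fill_counter = 0          # never updated by real executions
        sent = 0
        while broken_fill_counter < parent_qty:
            send_child_order("external venue", symbol, 100)
            sent += 100
            if sent >= 10 * parent_qty:  # safety cap for this sketch; Knight had none
                break
        return sent

route_parent_order("XYZ", 500, rlp_flag=True, has_new_rlp_code=True)   # behaves
route_parent_order("XYZ", 500, rlp_flag=True, has_new_rlp_code=False)  # runs away
```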

Yahoo unveils plan to make users hate it October 18, 2013 at 8:10 am

Typically accurate Daily Mash story. The new Yahoo mail interface really sucks.

Going loopy with the SEC May 30, 2013 at 8:19 pm

Thanks to Matt Levine, I have this lovely story of the Facebook IPO on Nasdaq. Matt points us at the SEC account of that dismal day for NASDAQ here. The first few pages are hilarious:

In a typical IPO on NASDAQ, shares of the issuer are sold by the IPO’s underwriters to participating purchasers at approximately midnight and secondary market trading begins later that morning. Secondary trading begins after a designated period – called the ‘Display Only Period’ or ‘DOP’ – during which members can specify the price and quantity of shares that they are willing to buy or sell (along with various other order characteristics), and can also cancel and/or replace previous orders. The DOP usually lasts 15 minutes…

At the end of the DOP, NASDAQ’s “IPO Cross Application” analyzes all of the buy and sell orders to determine the price at which the largest number of shares will trade and then NASDAQ’s matching engine matches buy and sell orders at that price…

NASDAQ’s systems run a ‘validation check’ which confirms that the orders in the IPO Cross Application are identical to those in NASDAQ’s matching engine. One reason that the orders might not match is because NASDAQ allowed orders to be cancelled at any time up until the end of the DOP – including the very brief interval during which the IPO Cross Application was calculating the price and volume of the cross. If any of the orders used to calculate the price and volume of the cross had been cancelled during the IPO Cross Application’s calculation process, the validation check would fail and the system would cause the IPO Cross Application to recalculate the price and volume of the cross.

This second calculation by the IPO Cross Application, if necessary, incorporated only the first cancellation received during the first calculation, as well as any new orders that were received between the beginning of the first calculation and the receipt of that first cancellation. Thus, if there were multiple orders cancelled during the first IPO Cross Application’s calculation, the validation check performed after the second calculation would fail again and the IPO Cross Application would need to be run a third time in order to include the second cancellation, as well as any orders received between the first and second cancellations.

Because the share and volume calculations and validation checks occur in a matter of milliseconds it was usually possible for the system to incorporate multiple cancellations (and intervening orders) and produce a calculation that satisfies the validation check after a few cycles of calculation and validation. However, the design of the system created the risk that if orders continued to be cancelled during each recalculation, a repeated cycle of validation checks and re-calculations – known as a ‘loop’ – would occur, preventing NASDAQ’s system from: (i) completing the cross; (ii) reporting the price and volume of the executions in the cross (a report known as the “bulk print”); and (iii) commencing normal secondary market trading.

This is precious in so many ways: and I am sure that you can guess what happened next. Don’t you love the SEC telling us what a loop is? But lolz aside, it does suggest that IT systems written for the pre-HFT era are not necessarily fit for purpose today.
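
If you want to see the shape of the problem in code, here is a toy sketch of that calculate-then-validate cycle. The data structures and names are mine, not NASDAQ's, and the real system incorporated cancellations more selectively than this, but the failure mode is the same: it never converges while cancellations keep arriving.

```python
# Toy sketch of the cross calculation / validation loop; invented structures.

def calculate_cross(snapshot):
    """Stand-in for the price/volume calculation; the details don't matter here."""
    return sum(o["qty"] for o in snapshot if not o["cancelled"])

def snapshot_matches_book(snapshot, live_book):
    """Validation check: were the orders used in the calculation still live?"""
    return all(live_book[o["id"]]["cancelled"] == o["cancelled"] for o in snapshot)

def run_ipo_cross(live_book, cancels_in_flight, max_attempts=10):
    for attempt in range(1, max_attempts + 1):
        snapshot = [dict(o) for o in live_book.values()]
        volume = calculate_cross(snapshot)
        # ...and while that calculation runs, another cancellation arrives:
        if cancels_in_flight:
            live_book[cancels_in_flight.pop(0)]["cancelled"] = True
        if snapshot_matches_book(snapshot, live_book):
            return volume                     # cross completes, trading can open
        # otherwise recalculate, now including the newly received cancellation
    raise RuntimeError("validation kept failing: the cross never completed")

book = {i: {"id": i, "qty": 100, "cancelled": False} for i in range(20)}
run_ipo_cross(book, cancels_in_flight=list(range(20)))   # raises: stuck in the loop
```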

Victims in Vegas April 25, 2013 at 10:18 pm

From Bloomberg:

As Ed Provost took the stage at the Green Valley Ranch Resort & Spa in Las Vegas to explain how a software malfunction had shut the Chicago Board Options Exchange for three-and-a-half hours, he was surrounded by people who were victims of similar disruptions.

On the panel with him were Jeromee Johnson from Bats Global Markets Inc., which canceled its initial public offering last year after failing to get the shares to trade on its own exchange, and Tom Wittman of Nasdaq OMX Group Inc., whose first-quarter profit was cut in half because of costs related to its mishandling of Facebook Inc.’s IPO in May. Six people on the attendee list were from Knight Capital Group Inc., which almost went bankrupt after a software error flooded the equity market with bad trades.

As the article’s author Nikolaj Gammeltoft says, today’s CBOE hiccup underscores how common software-related market disruptions have become. Yesterday’s events were not, so far as I know, fatal to anyone, merely inconvenient. But should an exchange like the CBOE have a monopoly on trading a contract as important as S&P 500 index options if it can go down unexpectedly for three hours?

My ipad, and your derivatives April 22, 2013 at 5:55 pm

Those of you with multiple devices – and most of us these days have at least two out of a phone, a tablet, a laptop, a desktop and an ipod-like thing – will be familiar with the nightmare of sync. This is particularly painful with music: you have it nicely set up in one place, yet somehow it ends up as a mess after transfer. A particular culprit here is itunes, which (1) Apple forces you to use and (2) seems to me to be as respectful of my music labeling as a con man at an easy marks convention. As a result, (be warned, painful confession coming up) my ipad thinks I have music by Pink, P!nk and P!ink with a Kanji character on the end that I can’t even copy; it thinks my Brahms fourth symphony is by Karajan and that Stranglers and The Stranglers are different artists. It has UB 40 and UB40, it puts plainsong under ‘Unknown’, and it is very very fond of ‘Unknown Album, Unknown Artist’. This is rather irritating and it takes a while to fix.

Given the state of this relatively small data set, rather little of which was manually entered*, imagine how good banks’ legal entity data is, given that it comes from a much bigger database, much of which has been typed in. There is, it is fair to say, the possibility of error. In particular, just like my multiple Pinks, there is some chance of finding The Goldman Sachs Group, Inc. and Goldman Sachs Group, Inc or even The Goldman Sachs Group, Inc in your counterparty database. (That period is easily missed.) And if one bank can’t always get it right, how much harder is it to sync this data across the whole industry? The legal entity identifier (‘LEI’) project tries to do that, and it is much to be applauded. In particular, without initiatives like this, trade repository data is a lot less useful.
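
To see why string matching alone won’t save you, here is a toy example; the names are hypothetical and the rules deliberately crude.

```python
import re

def normalise(name: str) -> str:
    """Crude entity-name normaliser: lowercase, strip punctuation and a leading 'The'."""
    name = re.sub(r"[.,]", "", name.lower().strip())
    name = re.sub(r"^the\s+", "", name)
    return re.sub(r"\s+", " ", name)

counterparties = [
    "The Goldman Sachs Group, Inc.",
    "Goldman Sachs Group, Inc",
    "The Goldman Sachs Group, Inc",
]
print({normalise(n) for n in counterparties})   # collapses to a single key here...
# ...but real data has abbreviations, foreign legal-form suffixes, typos and
# renames that no crude rule set will catch, which is exactly why an agreed
# identifier like the LEI beats string matching.
```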

Deadlines for getting LEI data into shape are approaching. As Katten Muchin Rosenman remind us:

Every swap end user (i.e., any party to an outstanding derivative contract who is not a swap dealer or major swap participant) should be aware that April 10, 2013 is the deadline for obtaining a “CFTC Interim Compliant Identifier” number (or CICI) in connection with its swap activities. The requirement arises under Commodity Futures Trading Commission Rule 45.6(f), which specifies that every “swap counterparty” must use a legal entity identifier (LEI) in all recordkeeping.

Meanwhile the data cleaners are doing a rather more systematic job than me swearing at itunes. This is well underway at many banks but, for those laggards, Mark Davies has a point: “processes relating to business entity reference data will require attention sooner rather than later”.

*Most of my music is from CD, and most of those auto-load the album and track information when they are ripped. The databases that information comes from, mind you, are not perfect.

There may be a twitter feed February 10, 2013 at 11:36 am

If the technology Gods are with me, there is now a twitter feed for DEM.

Blog performance improved? October 14, 2012 at 7:22 am

I have been having some issues with my web server, for which I apologise. Pages have been slow to load and/or not loading at all. Today some maintenance was done which should speed things up, but the installation is still a little shaky so I would really appreciate reports of any performance issues. Either email me or comment to this post please. Oh, and if by chance any of my readers knows about configuring WordPress cache optimizers (such as W3 Total Cache, or Super Cache), I would really, really appreciate a chat. Many thanks.

Gamification August 19, 2012 at 5:31 am

First I have to say that I cordially loathe the word – gameification or gamefication would be better – but the idea is interesting. From Wikipedia:

Gamification is the use of game design techniques, game thinking and game mechanics to enhance non-game contexts.

The key observation is that successful game designers are good at getting people to perform tasks – and enjoy them – so perhaps their techniques can be used to make people, well, um, to be blunt, work more.

What techniques are these? Here’s Deloitte:

Hundreds of separate game mechanics principles, behavioral economic theories, and current user experience design thinking can be distilled into four overarching elements, as noted below.

  • Progress paths: the use of challenges and evolving narratives to increase task completion. In games, the next desired action is usually clear. This clarity around objectives is usually not as explicit in real-world scenarios but is added when attaching progress paths to your processes and systems. The complexity of challenges in progress paths also increases over time. Where a novice is rewarded for more basic tasks, a more advanced user requires a challenge of greater difficulty to remain engaged with the system.
  • Feedback and reward: the use of rapid indications of success through virtual and monetary rewards. Games do not wait to reward you: buildings collapse and make noise, scores increase instantly, and virtual money may even change hands. In real-world scenarios, however, an individual’s action may go totally unnoticed or unrewarded. Adding hyperfeedback to a process can provide the right reward at the right time. Designing the right reward, then, becomes the second part of the design challenge…
  • Social connection: leveraging social networks to create competition and provide support. Games have often provided reasons for friends to gather. With the Internet, social networks and now the ability to be social over mobile devices, processes and systems can provide instant access to friends and social connections at any time. This enhances the ability to have conversations and dialogs with other users that increase the level of interaction and engagement.
  • Interface and user experience: aesthetic design and cross-platform integration considerations to enhance fun. Due to improvements in video game graphics and Web page design, many users are increasingly sophisticated when it comes to expectations for technology services. This presents a challenge for businesses with limited design staff. It also presents an opportunity for organizations that are able to either rapidly increase their design competency or network with firms who can fulfill that role.

There are several things that I find interesting about this.

First, trading is very much like a game already. It has clear progress paths, immediate rewards, a social interface, and compelling user interfaces. Trading in many markets is very much like playing a really good computer game, only you get paid a lot to do it, and the hardware is much better than the average games console. The dirty secret is of course that many traders have so much fun at work they would probably do it for nothing.

Second, it really is amazing just how much some people want to play games, and hence how much potential there is to enhance productivity if you can get gamification right. Obviously I would never advocate trying to create symptoms similar to video game addiction, but even making a dull job a little bit more fun may well be worth doing for both employer and employee. Gamification might help.

There is a caveat to this, though. While games can suck you in, you can get bored with them too – and you usually do. Even the most compelling computer game is eventually set aside by almost all players in favour of something new. But getting a new job isn’t as easy as buying a new game. If all gamification does is create a short-lived buzz of engagement which is inevitably followed by ennui, then it may be a lot less positive a force in the world of work.

This leads to my last observation: people will work out what gamification does. When that happens, many of them will cease to be fooled. Not only will they see through a particular example of gamification, and likely resent being manipulated; they will see through a lot of different examples of it. In short, they will become immune to the technique. Gamification is interesting, especially seen as a kind of psychology experiment on work. But before using it, the experimenters should consider what happens when the rats work out what is going on.

When delegation fails May 31, 2012 at 7:21 am

When you pay someone else to do something for you, the problem of what you are paying for, exactly, sometimes comes up. This is one of the issues with PFI, for instance; it seems to be essentially impossible to write a contract that defines what it means to run a hospital or a railway properly. So either the taxpayer ends up paying far too much to the outsource provider for contractual variations, or they don’t get the service they thought they were buying. Or both.

It is interesting to read that the same issue occurs in IT outsourcing. The Guardian has a series of interviews with various (anonymous) individuals from different areas of banking. This week’s is from an IT sales guy. He says:

Years ago management in major banks and corporations decided that they could outsource vital IT functions to companies such as IBM, Tata, HP and Atos Origin T-Systems. The idea was that if you describe the processes you require adequately, it’s safe to delegate their execution to outsiders. But the first contract goes to IBM, two years later a contract for another part of the infrastructure is awarded to HP, then Cisco gets to manage the network … Now, who is responsible for the overall system?

Indeed. Combine that with in-house IT that sees their prestige rise the more they spend, and you have very unhelpful incentives for the wider organization. You can’t tell the outsourcer what you want because you don’t understand it yourself, and anyway your requirements change regularly. You can’t upgrade easily because of all the interdependencies within systems. And the people you are paying to manage all of this have little incentive to reduce complexity, because that would mean smaller budgets and fewer staff, and budget is power in many firms. Now of course a good CIO could fix this, but honestly, can you name a CIO who really understands their firm’s systems at a deep level? As the sales guy goes on:

[CIOs] are managers, skilled in office politics, not technical experts. Most CIOs rarely stay in their post more than a few years. I worked for one of the major software companies in the world. It took my boss a year and a half of begging and pleading with the secretary to get a meeting with the CIO of a major client. CEOs are worse. They are afraid of looking stupid or ignorant, and actively avoid their IT people.

That phrase ‘actively avoid their IT people’ rings all too true to me.

The old and the new February 26, 2012 at 12:57 pm

Back to finance tomorrow, I promise, but meanwhile… I love the fact that someone is thinking of using Kickstarter to fund a new translation of Antigone. OK, she might not have a novel take on the Tycho von Wilamowitz-Moellendorff problem (you knew I had to hit up the big T W-M, right?), but using a four-year-old technology to raise money to work on a 2,500-year-old book is a lovely juxtaposition.

The capitalist network October 25, 2011 at 11:26 am

Sorry, I am suffering from post backlog. Here’s something I have been meaning to get to for a little while.

An analysis of the relationships between 43,000 transnational corporations has identified a relatively small group of companies, mainly banks, with disproportionate power over the global economy.

Now I don’t entirely agree with the methodology but that does not matter too much: the important part is that the authors have tried to ‘empirically identify such a network of power’. Chapeau.

In praise of appropriate technology September 8, 2011 at 9:11 am

As complicated as it needs to be

I work in a building with a lot of different wifi networks. There are at least 30 visible from my laptop, and that is before you include cordless phones and other sources of interference. So I’ve been getting drop-outs: there are simply not enough channels to go around.

My guess is that this will be an increasingly common phenomenon in cities. The current wifi standard, 802.11n, supports at most 14 channels in the crowded 2.4GHz band (and only a few of those don’t overlap), and that just isn’t enough for dense urban situations. Now there are things that you can do, like using a modern dual-band multi-stream router or boosting your signal, but if everyone does that – or even if enough people do – you are back to square one. So what can you do that is reasonably future-proof?

The answer isn’t pretty, cosmetically, but it is cheap and effective. I spent £3.46, including postage, on a six-metre ethernet cable and went back to 1980s technology: wired access. It’s much more secure, it’s faster, and there are no drop-out issues. Sometimes the right answer is to move backwards, not forwards.

Work complete July 23, 2011 at 6:36 pm

Our work here is done

The good news is that Deus Ex has been upgraded to the latest version of WordPress, the spam filter has been improved, and the site should be both faster and (even) more secure going forward. The bad news is that while the WordPress upgrade works, it never admits that it has finished, so I sat for an hour waiting for it to stop saying ‘updating’ when it was perfectly happy to carry on saying that forever. And that, dear reader, may well be a good lesson for life – as we are indeed always updating – but it is a bloody irritating thing for a piece of software to do…

Polymorphic currency February 13, 2011 at 8:20 am

When I first learned to program, in the era of Fortran IV, types were important. Something was either a character or an integer, a real number or a matrix. It was what it was, and that was that*. Pascal was even more Victorian about keeping different things different.

Years later, the wonders of polymorphic languages became commonplace. Here things could change. They could be what you wanted them to be, when you wanted them to be it. Miranda was pretty liberal, for instance (not least because she didn’t come from Edinburgh), and things got even more splendid after that.

What’s all this got to do with finance?

Well, recently there has been a push to replace the dollar as a reserve currency with special drawing rights. SDRs are pretty close to being a polymorphic currency. An SDR represents a claim at a central bank – you can ask them for money – but you can trade it for another claim at a different central bank. It would not be much of a step to simply have all central banks required to honour SDRs. (OK, then they could not control the amount of currency that they might be required to issue, but SDRs are a tiny tiny fraction of total money supply, and they could always sterilise any SDR-related issuance.)

Just to indulge in a flight of fancy, I’d love an SDR-based credit card. A bill in Euros? Certainly, take this. In dollars? Not a problem. Kazakhstani Tenge? Fine. There would be no troublesome FX to worry about, no rip-roaring fees for daring to be in a different country. Oh, and first dibs on setting up the SDR swaps market please.
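
Purely to push the analogy (and the flight of fancy) a little further, here is a toy sketch of a ‘polymorphic’ claim. The rates, the class and the mechanics are entirely made up; the point is only that one claim settles into whichever currency you ask for.

```python
# Entirely invented rates and mechanics: a toy 'polymorphic currency' claim.

SDR_RATES = {"USD": 1.33, "EUR": 1.02, "KZT": 200.0}   # made-up conversion rates

class SDRClaim:
    """A claim that can be presented to any participating central bank."""
    def __init__(self, amount: float):
        self.amount = amount

    def settle(self, currency: str) -> float:
        """The same claim, honoured in whichever currency the holder asks for."""
        return self.amount * SDR_RATES[currency]

bill = SDRClaim(100.0)
print(bill.settle("EUR"))   # pay a euro bill
print(bill.settle("KZT"))   # or one in tenge, with no FX leg for the cardholder
```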

* Well, apart from EQUIVALENCE. I vividly remember having to use some rather opaque code involving common blocks and equivalence to communicate with a 1960s vintage flat bed plotter…

Jessica Alba naked and other news from the repo market November 13, 2010 at 2:49 pm

The Market is a Mess

I use a tool to detect and remove spam comments. It’s pretty good. Occasionally I look through the list and check that the comments are indeed spam before deleting them forever. You can get a real sense of the internet economy from them too – fake designer sunglasses in summer, fake Ugg boots in winter and viagra all year round. But somehow a site advertising pictures of Jessica Alba naked linking to a post on repo market margin: that just made me laugh. It’s a wonderful market out there folks, it really is.

Sand and fat fingers May 8, 2010 at 10:52 am

I attended a meeting last week at which a doctrinaire free markets economist was praising the benefits of a market in markets. The theory was that lots of different markets in the same asset would somehow give better liquidity, cheaper trading and hence better price transparency. What tosh.

No, what we see instead is that a diversity of markets produces poor liquidity, gappy markets and, just occasionally, near disaster. That seems to have happened on the 6th of May, when a market fall erased a trillion dollars in value in what Bloomberg dubs a ‘flash crash’. The WSJ account is here. It seems that a ‘fat finger’ trade, i.e. a mistaken transaction where perhaps a trader executed billions rather than the intended millions, set off the wave. What happened then, it seems, is that waves of automated trading intensified the problem. Some of the smaller dark pools – alternative markets – were overwhelmed by the orders placed and became disorderly.

As Rajiv Sethi says, this is a recipe for disaster: computer-driven trading executed in milliseconds, poor liquidity, and no automatic trading stops make for instability.

The Bloomberg article above then says:

One SEC memo, according to people who saw it, discusses a theory raised yesterday by NYSE Euronext spokesman Ray Pellecchia, who said sudden price moves in multiple stocks reached so-called liquidity replenishment points. That prompted the exchange to slow trading in those shares as it tried to ensure an orderly market. Such incidences allow other exchanges to ignore NYSE price quotes.

Trades sent to electronic networks then fueled the drop, said Larry Leibowitz, chief operating officer of NYSE Euronext. While the first half of the Dow Jones Industrial Average’s 998.5-point plunge probably reflected normal trading, the decline snowballed as orders went to venues lacking liquidity to match them, he said in an interview yesterday…

NYSE competitors such as Nasdaq OMX Group Inc. don’t use liquidity replenishment points. The SEC and CFTC in their joint statement raised concerns that the plunge may have been caused by exchanges not adhering to uniform practices.

“We are scrutinizing the extent to which disparate trading conventions and rules across markets may have contributed to the spike in volatility,” the regulators said.

No wonder. It is time to end this market in markets, and to throw some sand in the cogs of the algos. If every trade executed in the same, say, five-second interval got the same price, instability would be greatly reduced, yet ordinary investors would not notice the effect. And if every trade were executed on the NYSE, or at least under the same market conventions, then officials could actually stop everything when things get out of hand.
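
To make the five-second idea concrete, here is a rough sketch of a uniform-price batch: collect the orders that arrive in each window and clear them all at the single price that maximises traded volume. It is a toy, not a market design, and all the numbers are invented.

```python
# Toy sketch of interval batching: invented orders, invented numbers.

from collections import defaultdict

def clear_batch(buys, sells):
    """Single clearing price: the price at which the most volume can trade."""
    best_price, best_volume = None, 0
    for p in sorted({o["price"] for o in buys + sells}):
        demand = sum(o["qty"] for o in buys if o["price"] >= p)
        supply = sum(o["qty"] for o in sells if o["price"] <= p)
        if min(demand, supply) >= best_volume:
            best_price, best_volume = p, min(demand, supply)
    return best_price, best_volume

def run_batches(orders, interval_seconds=5.0):
    """Group orders into five-second windows; everything in a window trades at one price."""
    windows = defaultdict(list)
    for o in orders:
        windows[int(o["time"] // interval_seconds)].append(o)
    for w in sorted(windows):
        buys = [o for o in windows[w] if o["side"] == "buy"]
        sells = [o for o in windows[w] if o["side"] == "sell"]
        print(f"window {w}: price/volume = {clear_batch(buys, sells)}")

run_batches([
    {"time": 0.4, "side": "buy",  "price": 60.10, "qty": 300},
    {"time": 1.8, "side": "sell", "price": 60.05, "qty": 200},
    {"time": 3.2, "side": "sell", "price": 60.12, "qty": 200},
    {"time": 6.1, "side": "buy",  "price": 60.00, "qty": 100},
])
```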

Update. Here’s the letter from Senators Ted Kaufman and Mark Warner asking the SEC and CFTC to investigate the events of the 6th. The joint SEC/CFTC ‘we are looking at it’ letter is here.

A very readable and plausible account from a sell side analyst is here. I’m going to quote it at length as it deserves the widest possible dissemination:

I’ve got 28 pages in front of me of P&G prints [individual trades in Procter and Gamble] that occurred between $39 and $50 per share and between 2:46 p.m. and 2:51 p.m. At 36 prints per page, that means P&G traded over one thousand times at those “crazy” and “surely erroneous” levels. I’m sorry, but that isn’t an error, THAT IS WHAT WE LIKE TO CALL TRADING. So what happened here? Three things:

  1. Sellers probably had orders in algorithms – percentage-of-volume strategies most likely, maybe VWAP – and could not cancel, could not “get an out.” These sellers could be really “quanty” types, or high freqs, or they could be vanilla buy side accounts. It really doesn’t matter. The issue here is that the trader did not anticipate such a sharp price move and did not put a limit on the order. The fact that the technology may have failed does not mean the trader deserves a do-over, it means that the trader and the broker who provided the algorithm need to decide whether any losses should be split.
  2. Sell stop orders were triggered which forced market sell orders into an already well offered market.
  3. While the market was well offered, it was not well bid. Liquidity disappeared. For example, in P&G, 200 shares traded at $44.10 at 2:51:04 in the afternoon and one second later, at 2:51:05, three hundred shares traded at $47.08. That’s a three dollar jump in one second. Bids disappeared, spreads blew out, and no one was trading except a handful of orphaned algo orders, stop sell orders, and maybe a few opportunists who had loaded up the order book with low ball bids (“just in case”). High frequency accounts and electronic market makers were, by all accounts, nowhere to be found.

It boils down to this: this episode exposed structural flaws in how a trade is implemented (think orphaned algo orders) and it exposed the danger of leaving market making up to a network of entities with no mandate to ensure the smooth and orderly functioning of the market (think of the electronic market makers and high freqs who can pull bids instantaneously as opposed to a specialist on the floor who has a clearly defined mandate to provide liquidity).
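
Point (1) above is easy to see in code. Below is a toy percentage-of-volume slicer, with all prices and volumes invented: without a limit price it follows the market all the way down, while with one it simply stops participating.

```python
# Toy percentage-of-volume slicer: invented prices and volumes.

def pov_sell(parent_qty, participation, tape, limit_price=None):
    """Sell roughly `participation` of each tick's printed volume."""
    remaining = parent_qty
    for price, tick_volume in tape:
        if remaining <= 0:
            break
        if limit_price is not None and price < limit_price:
            continue                     # a limit price would have paused it here
        slice_qty = min(remaining, int(tick_volume * participation))
        remaining -= slice_qty
        print(f"sold {slice_qty} at {price}")
    return parent_qty - remaining

crash_tape = [(60.0, 10_000), (55.0, 12_000), (47.0, 15_000), (41.0, 20_000)]
pov_sell(5_000, 0.10, crash_tape)                     # no limit: sells all the way down
pov_sell(5_000, 0.10, crash_tape, limit_price=55.0)   # with a limit: stops below 55
```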

The importance of aligning revenue and product March 18, 2010 at 5:51 am

As a palate cleanser between stodgy doses of regulation and financial disaster tourism, I’d like to reference an interesting criticism I read recently of the advertiser-led business model — Google’s business model.

The article is EMC vs. Google: There Should be No Competition by Rob Enderle on IT Business Edge. For me the key point is this one:

It isn’t really clear who Google’s customer actually is. Advertisers pay the bills …, but most of the products are focused on providing services to others… Harry Potter doesn’t fly in and create an enterprise offering. Someone is paying the bill to create it and that someone is going to want value. When you decouple revenue from the product, it becomes very difficult, and you can see this with Google, to stay focused on quality and customer satisfaction.

In one sense Google is a hugely successful company. It makes advertising-led free (or cheap) products work. But this comes at a significant cost to the user in privacy and security. You get what you pay for. In the retail space, Google’s focus on selling advertising and providing an insecure but free service might make sense for some. But in the enterprise space, the argument is much less clear. Corporate IT might be moving to the cloud, but I doubt the Google model will have that much success there. At the end of the day, companies understand that if you are not the customer, the product is not designed for you, and it probably won’t meet your needs.

If you have a wordpress blog… January 16, 2010 at 7:48 am

… and you get spam comments (mine were primarily from Russia), then you might want to add an .htaccess file. The skinny is here.

Safety and the little guy October 1, 2009 at 2:16 pm

I went to a talk last night given by a man from Symantec, the publisher of the Norton computer ‘security’ products. It suggested that their business model is broken, and that their solution is likely both to produce many more false positives – files they flag as malware but which aren’t – and to make it much harder for small houses to distribute software.

Let me explain. At one end of the spectrum, the security providers are good at blacklisting large infections. If a worm or virus infects millions of computers, they find it, discover what makes it unique, and update their program to stop the malware getting any further. At the other end, big software publishers have their programs whitelisted, so, for instance, the latest update to Office does not trigger a malware alert.

A safe place?

The bad guys have responded to this by producing malware in small batches. Each variant is not virulent enough to trigger blacklisting, and there are in any case too many of them to catch.

Norton’s utterly misguided response is to gather information about what their clients are doing, and thus to be able to signal when a program is sufficiently new that it might be dangerous. There are, they claim, other criteria than newness and number of installations which are used to decide if something is dangerous, but the basic issue is obvious: Norton is going to decide whether something is malware not based on an analysis of the program, but simply on general criteria such as how many other people have installed it. For legitimate programs with a small user base, that is going to be a problem. I even wonder if it is legal: it feels like a restraint of trade to me (but of course I am not a lawyer).
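
Here is roughly what that logic looks like; the thresholds and fields are entirely invented, and no doubt cruder than Symantec’s version, but the shape is the point: the verdict depends on prevalence and age, not on what the code actually does.

```python
# Invented thresholds and fields: a caricature of reputation-based scoring.

def reputation_verdict(install_count: int, first_seen_days_ago: int) -> str:
    """Judge a file by its popularity and age rather than by analysing it."""
    if install_count > 100_000:
        return "trusted"            # effectively whitelisted by popularity
    if install_count < 50 and first_seen_days_ago < 30:
        return "suspicious"         # new and rare: the little guy gets flagged
    return "unknown"

print(reputation_verdict(2_000_000, 400))   # a big vendor's update: trusted
print(reputation_verdict(12, 3))            # a niche developer's new release: suspicious
```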

In any event, it is yet another case of one large corporation – Symantec – protecting others and leaving the little guy out in the cold. Given that there are perfectly good no-cost antivirus and firewall programs out there, why would anyone install something that only told them software was safe if it passed arbitrary tests that had little to do with its actual safety?

Leaving google September 27, 2009 at 12:35 pm

Rainy Vegas

Used to be I could drive up to
Barstow for the night
Find some crossroad trucker
To demonstrate his might
But these days it seems
Nowhere is far enough away
So I’m leaving Las Vegas today

I don’t think I have ever been to Barstow – and my acquaintance with truckers is slight – but I do see Sheryl’s point. There is a time when you have to let the tacky indiscretions of your youth go. And having a blog hosted at blogger is certainly a tacky indiscretion.

The main reason is google’s attitude to privacy. It seems from their actions, if not their words, that they do not believe in it. For instance, you cannot even sign in to blogger without enabling cookies from google.com, thus opening up all of your searching (and potentially much more) to google’s scrutiny. Then there is the fact that blogger stores the id of ‘anonymous’ comments, making them discoverable to anyone who knows how to query the database properly. And then there was the time that blogger decided that my blog was spam and suspended my access for a couple of days. No, if you don’t want to feel like you have just let a large American corporation demonstrate their might, it is time to leave blogger.