PSA: Your Verizon.net email address will stop working

If you do NOT have a verizon.net address, then don’t gloat. Check your inbox for friends & family who use it, and give them this info.

If you DO, then either (1) Verizon has cruelly and rudely informed you, with short notice, that they will end your verizon.net email service, or (2) Verizon will do so “in the coming weeks.” But you can save your email address.

You must wait to receive the notice (in email and/or when you log in to Verizon webmail), but then you must act quickly, within the deadline you are given. They promise 30 days, but some people got six days’ notice.

Context: Verizon is ending email service, as obnoxiously as possible, because (1) they now own AOL, (2) they don’t want to do any avoidable work, and (3) they are thoughtfully reminding us of the historic inability of the telecommunications sector to deliver a user experience that isn’t horrific. If you don’t act, your verizon.net address will stop working. I wrote this because I am close to a few Verizon victims.

The good news: You can preserve your verizon.net email address.  (And you want to do so.) Even if you’re not using it actively, if you ever did use that address there surely are people you care about who never entered your newer email into their address books.  It’s worth the few minutes to have it preserved as yours, forever, including after you drop Verizon entirely.

To get it done:  Continue reading “PSA: Your Verizon.net email address will stop working”

Which reality: Are VR & AR over-hyped? Or inevitable and transformational?

[This analytical essay was published on Digital Media Wire.
Below is the pre-publication draft.]

Even if you’re not certain what’s meant by the buzzwords “Virtual Reality” and “Augmented Reality”, you have surely heard their growing buzz. This year’s Game Developers Conference includes a two-day VR Developers Conference, and GDC’s Expo will feature at least 4 VR headsets and 70 VR games. While the game industry is consistently an early adopter of new interactive technologies, VR is already a multimedia phenomenon: a “virtual reality experience within Amazon Video” is in development, film festivals are featuring VR movies, and there’s a VR broadcast of the Coachella Music Festival. VR & AR are enjoying rapt attention from both industry press and general news media.

The new display technologies are also getting financial attention. Facebook’s startling 2014 acquisition of VR developer Oculus was big news, based on the huge price: $2 billion. Some 120 VR deals in 2015 drew another $632 million from dozens of firms and funds, and inspired the creation of VR/AR-specific funds and incubators.

Will this excitement inevitably drive the proliferation of VR & AR experiences? Or will it bring VR & AR to an early Peak of Inflated Expectations, followed by a descent into a deep Trough of Disillusionment? (To borrow the fantasy-fictional jargon of the autological Gartner Hype Cycle.) Will VR & AR become as ubiquitous as touchscreen displays? Or are the ballooning expectations dangerously over-inflated? To all these questions, the answer is “Yes, but relax about it.”

Despite all the talk about VR & AR, the words themselves are vaguely defined. For many years, VR consistently referred to “an exciting rendering technology, where I cannot afford the peripheral.” In 1980, this included the simplest possible real-time 3D rendering technology, but the rapid proliferation of personal computers in the next few years brought real-time 3D to the desktops of the masses, and real-time 3D on a flat screen was no longer considered “VR”. In the early ‘90s, haptic feedback was VR technology, but when Microsoft introduced affordable force-feedback joysticks and steering wheels in 1997, they were welcomed as game accessories, not “VR devices”.

In other words, until recently VR was an aspirational buzzword; it referred to technologies that were not yet ready for widespread consumer distribution. Another aspirational buzzword is Artificial Intelligence, which essentially refers to decision-making or semantic-modeling technologies that are not yet fully feasible. Out of AI have fledged such important technologies as predictive analytics, voice interfaces, and robotic vacuum cleaners, all of which are no longer thought of as AI technologies. Similarly, Big Data denotes datasets whose analysis is not fully feasible, and Home Automation covers exactly those systems that are unwelcome in my house: a generic 7-day programmable thermostat is not an example of Home Automation, but the Nest Learning Thermostat certainly is. The Nest autonomously downloaded defective software this January, abruptly shutting off heat for many homeowners, amidst record-setting cold.

For some of us, VR continues to denote rich, multimodal, real-time simulations of reality. But today the phrase predominantly refers to any one of a number of head-mounted displays that include stereoscopic video output, while reading sensors that indicate position, movement, and perhaps location. In other words, today VR usually means “a bucket over your head, with a video projector inside, and a lot of sensors.”

Augmented Reality, in the usual modern sense, refers to that same VR bucket, except AR’s bucket has transparency. This allows the user to interact with the real world, overlaid with fully-reactive computer imagery. That distinction accounts for the many game designers and computer scientists who are unimpressed with VR, and deeply enthusiastic about the potential for AR. As one computer scientist notes, AR “has the entire world and much of human experience as raw material to be augmented,” into which it can introduce virtual objects or relevant information, by stark contrast to the fully-immersive VR experience. While enjoying a VR (in the newer, headset-wearing sense of VR) experience, it’s unwise to get out of your seat, let alone walk around and interact with your environment; you are fully blind to the world. In an AR experience, you might see your hands, as they create magical items, or mundane craftworks. And as AR systems learn to map and model the environment around them, your virtual creations could be placed on your actual living-room mantel, for the viewing pleasure of anyone who shares that AR-enhanced view.

Given the overlap in VR & AR technologies, particularly in terms of sensor-enhanced head-mounted display technologies, these distinct concepts are often lumped together, as in this article. Or one might be used to encompass the other, as in GDC’s Virtual Reality Developers Conference, which targets creators of “immersive VR (and AR) experiences”. Google’s Noah Falstein made a brave and reasoned effort to posit “Transmogrified Reality”, to include AR & VR, while highlighting the power of their effects.

As with VR, the meaning of AR has evolved. It historically has referred to any computerized output that overlays the real world. For example, the “3D Compass” app has long been a simple, if useful, example of AR. The app superimposes a compass display atop the real world (as displayed on a smartphone screen, via the phone’s camera), while showing an oriented map in half the screen. As with VR, that older sense of AR remains at large, and includes augmentations in the form of sound or text, but the usual usage refers to head-mounted video or graphical augmentation.

Finally, AR as a buzzword has shared an aspirational quality with VR; a fighter-pilot’s heads-up display, with essential data projected onto the windscreen, was recognized widely as “AR”. But as the same HUD appears in consumer automobiles, the driver accepts it as simply how she gets to see her speed, or route, without having to take her eyes off the road. If that HUD is referred to as “AR”, the speaker is probably a marketing professional.

Although AR might have the widest range of application, bringing data-display into interactions with an airplane’s wiring harness, or with a surgical patient’s peripheral arteries, VR, too, has applicability beyond games, or video entertainment. Game designer and author Raph Koster wrote an early analysis of Facebook’s $2 billion Oculus acquisition that underscored the importance to social interaction of presence, that quality of rich interactive connection that will ensure eternal demand for physical college campuses in the face of rapidly-improving online courseware, and for physical conferences in the face of online-collaboration technologies. Facebook’s core business remains one of human connection. The potential for VR to enrich that connection logically motivates the Oculus deal, even if the nature of VR-based social-networking interaction remains unclear.

Indeed many details of the future of VR & AR experiences are unclear, even if the potential is compelling. Competing VR displays now span a range from Google Cardboard, which was distributed free to New York Times subscribers, to Microsoft’s $3000 HoloLens Development Edition. Somewhere in that spectrum is a threshold of “good enough” for broad consumer demand, for any given application of the technology. Elsewhere in that spectrum is the corresponding threshold of “cheap enough” for that demand to be satisfied. As these thresholds converge, the promises of VR & AR could be realized.

There’s just one problem: when it comes to entertainment media, technology can be the easy part. It can take years for content creators to find the right application of a new technology, and to design the content that takes advantage of it. During that time, a medium can be “huge, just two or three years from now!” for over ten years.

That first compelling application that paves the way for an entire platform is the original sense of “killer app”: the original spreadsheet program, VisiCalc, released in 1979, drove the success of the Apple II computer and motivated IBM’s release of the PC. Similarly, a single game title can reveal the potential for an entire genre of experiences. (This applies to the extent that I define game genre as “a hit game, and its imitators”.) Each VR-device manufacturer is seeking its own killer app, which will probably be an entertainment experience. AR’s wider applicability might lead to its success emerging from a wider range of genre-defining experiences in various industries or content categories.

While new interactive technologies can make many new experiences possible, not all of them are appropriate. Touchscreens, once a rare and exciting technology, are becoming commonplace, appearing everywhere. Unfortunately, this includes touchscreens serving as the main systems-interface in automobiles, replacing the knobs and dials that had allowed drivers to keep their eyes on the road.

When real-time 3D animation was new, there was similarly ill-conceived over-application of that technology. In the mid-1990s, retailers were excited about bringing the sales process online, which history has shown to be a wise impulse. But many of them sought to do so with a VR experience, which at that time meant a 3D-animated simulation of a real-world in-store shopping experience. The results were very high-tech, and visually exciting (for their day), but also an efficient means to bring all the inconvenience and frustration of real-world shopping to the otherwise-efficient online store.

Another pathology of new media is often “shovelware”, the careless, hurried redeployment of the previous medium’s content onto the new one. When CDs were new, “multimedia” content became the rage: encyclopedias, textbooks, courseware, and games were all compelled to appear on optical media, with music, animations, video, and whatever else would exploit the new technology. This did not last as a medium in its own right. But the integration of sound and images with a broad range of content did become commonplace, even while “multimedia” became a term of derision. (“I survived the multimedia scare of 1993.”) The technology succeeded, even as it disappeared as a product category.

The multimedia era showed that the success of a technology need not correlate with the success of the innovators in that technology. Multimedia was not kind to its parents. Similarly, even as the personal computer industry grew dramatically, bringing computers into every household, and later onto every desk, the PC manufacturers suffered.

The fact that Internet Service Providers prospered in step with the growth in Internet access reflects their monopoly position, granted by municipalities in the 1980’s when Community Antenna Television (aka CATV) was seen as an important public good, to enable access to broadcast (over-the-air) television signals. The thousands of cable companies that became today’s Comcast (and its very few competitors) were each local monopolists. That was an unusual trick of history; today’s innovators would do better to heed the warnings from multimedia, from personal-computer manufacturing, and from the various console-game platforms that failed to build a roster of compelling proprietary content.

VR & AR offer an inherent value that has led investors, manufacturers, and content developers to a shared confidence in their future. This distinguishes VR & AR from 3D television. 3DTV was driven by television manufacturers, who were desperate to find arguments for consumers to replace their perfectly good large flat-screen TVs. The content industry experimented in the medium, and turned away. At best, a 3DTV production could hope to resemble a 3D movie: an incremental enhancement to an already well-defined experience that remains fundamentally unchanged, and an enhancement delivered at burdensome production cost, with mixed results.

With time, the creative balances are found, and the truly valuable technologies become prevalent, even ubiquitous, exactly while they become unremarkable. A cynic might snark at the way an “aspirational buzzword” such as AI might apply only to those technologies that are not clearly feasible, but the value of a field such as AI is proven by the wide range of its alumni. The success of VR and AR similarly will be proven by the casual acceptance, to the point of disregard, with which consumers will greet the most engrossing entertainment platform, or the most enriching workplace knowledge base.

 

Dan Scherlis is an executive producer of health games, including the NIH-funded BreatheFree smoking-cessation intervention. Dan was founding Content Director of Comverse Mobile Games. At Turbine, he was CEO, and Producer of the Asheron’s Call MMO.

 

Game Developers Conference 2016 launches today, with inaugural VR Developers Conference

[Below is a pre-publication draft of an item that will appear later today on Digital Media Wire. The below will then be replaced by an excerpt of the final version, and a link. This piece is basically a frame for my longer analysis & opinion on VR & AR.]

The 30th Game Developers Conference today begins its week-long occupation of San Francisco’s Moscone Center. In addition to the usual collection of one- and two-day “summits” that precede the core Wednesday-Friday conference, this year’s GDC includes a new two-day program. “The Virtual Reality Developers Conference (VRDC) is a new event for creators of amazing, immersive VR (and AR) experiences.”

GDC’s promotion of the VRDC, and the event’s “new conference” status, reflect the fascination with VR & AR that is widespread, but perhaps deepest in the games industry.

The new VRDC includes two tracks: a “Game VR/AR Track” for game developers, and an “Entertainment VR/AR Track” for “multiple industries including filmmaking, travel, retail, fitness, product design, journalism, and sports.”

For the GDC to devote a track to non-game content would be consistent with a transitional status for VRDC, co-located with GDC until it proves itself capable of independent flight.

And VRDC is off to a strong start: VRDC-specific tickets are sold out.

 

Dan Scherlis is an executive producer of health games, including the NIH-funded BreatheFree smoking-cessation intervention. Dan was founding Content Director of Comverse Mobile Games. At Turbine, he was CEO, and Producer of the Asheron’s Call MMO.

 

I’m a Health Games Guy, These Days

I’m writing this as I arrive at the Game Developers Conference. For me, this is an annual reunion with some people I admire, respect, and enjoy. (I also hope to go to some sessions.) As happens with our annual milestones, I instinctively compare myself to my last-year iteration. I’ve a different business card and self-identity. And I’m part of three projects and teams that I enjoy:

I’m starting with a personal note, but I’ve some thoughts on a new medium: During the last year, I’ve happily transitioned from “game executive who’s looking into different areas” into an enthusiastic “health games executive producer”. I had been advising a couple of projects, and as they gained momentum, I gained insight into the peculiar needs and opportunities of this space. It reminds me of the first years of what we later called massively-multiplayer games: it’s the frontier. My fellow expatriates from traditional games and I don’t yet agree on the best creative approaches or business models, but we share a confidence that this stuff will work. I mean: These can work out nicely for the companies deploying these games, and can work for the people playing these games. (Our players, or should I say “patients”? Or maybe “customers”? During our testing they are “subjects”. But I suggest we avoid the game-industry’s “users”, shall we?)

And, as with MMOs, we’re grappling with a new context that makes new demands. The only reason for health games to exist, indeed the only motivation that justifies developing any “serious game”, is the opportunity to provide superior results from a clinical, behavioral, or educational perspective. I don’t remember the word “efficacy” being uttered ever, let alone regularly, in traditional-game product-planning meetings. I call myself an executive producer, which means I am likely to identify and contract the development team, to ensure a convergence between an engaging game design and an efficacious intervention strategy, and to manage and support the funder/developer relationship. As E.P., I am certainly focused on delivering a successful product, and on forming the partnerships or relationships necessary to success. For my current projects, “success” means revenues and commercial leadership.

Health games have not included very many commercial successes, with important exceptions in a couple of sectors. Specifically: fitness, and mind-training or “brain games”. I think there are reasons for the limited successes: Few health games have started from a clear understanding of why a *game* should be the best delivery mechanism. Few well-motivated projects include experienced, proven game designers, without whom any game is unlikely to be fun. And few of these are conceived and initiated with a clear understanding of how they will go to market, of who will pay for them, and of why the payors should be expected to do so.

The odds appear to be long, which is only a problem if you are making a fair bet on a level playing field.  I don’t play roulette. I will happily enter any contest with a rich, long-shot-style, payout, but only if I’m playing with a team of ringers.

My column for DMW: Don’t clone my indie game, bro


Soon after arriving at this year’s Game Developers Conference (GDC) I was struck by the complaints — both in conversations and in rant-style conference sessions — about a rampant and increasingly common practice of large game companies ripping off the work of smaller, independent developers.

When I spotted a clever little badge ribbon, one that clearly was not authorized by conference management, I wrote this column for Digital Media Wire.

Panel at Boston Post Mortem: Analytics & Metrics

I’ve assembled a panel for tomorrow night’s regular monthly meeting of Boston Post Mortem, aka the Boston Chapter of the IGDA (International Game Developers Association).  I’ve a business trip, so I’ll miss the session.  That’s a shame, because the panelists bring a wide range of perspectives on the use of analytics and metrics for game development:

I do enjoy putting together a panel, and I enjoy moderating as well. But, aside from my being out of town, Darius is flat-out better qualified for this one. Plus, I’ve been working for Sonamine, and thus didn’t really belong up there as his moderator.

Panel at Harvard: Evolutionary Biology Looks at Videogames (Who Plays Games and Why)

[Update: Added more links based on our discussion. More will follow this weekend.]

For a few years now, I’ve wanted to get a game designer (or two) into a serious discussion with an evolutionary behavioral biologist (or two). Obviously we find games — specifically videogames — fun, compelling, and sometimes badly addictive. But just what is it about those activities that is so rewarding?

I’ve finally rounded up the venue, the right scientists (Harvard’s Richard Wrangham and his colleague Joyce Benenson of Emmanuel College), and a couple esteemed colleagues (Kent and Noah). We’re on!

The event is Wednesday night.  It’s at Harvard, and walk-ins are welcome.  Below are the details for the event, from the Harvard page, and links to some supplementary materials.  I fully expect to add more links, based on our discussion.

I can’t resist noting: as I type this, there are no Google hits for “evolutionary ludology.” Here are the vitals for the event:

Who Plays Games and Why: Evolutionary Biology Looks at Videogames

A discussion with Harvard Human Evolutionary Biology Professor Richard Wrangham, Emmanuel College Psychology Professor Joyce Benenson, and game developers Noah Falstein and Kent Quirk.

Wednesday, June 2, 2010.   5:30 -7:30 p.m. (registration begins at 5:00 p.m.)

Location: Harvard Science Center, One Oxford Street, Cambridge

Electronic games are competing with television for that essential resource: consumer attention.  But exactly who is playing these games? And what is their appeal? Indeed, why do people find games “fun” at all, from simple board games to immersive 3D fantasy worlds? Is there a biological reason that males and females play dramatically different kinds of games?

The many genres and formats of games will be surveyed in a brief multimedia overview, with a look at the different populations that play these different games. Then, human-behavioral scientists will collaborate with game-design professionals to explore the biological roots of our attraction to these experiences.

Please join this discussion, with:

Alumni and friends of the Harvard community: $10.    Undergraduate Students: complimentary

Supplementary materials for this session:

Articles and other online resources, general background:

Items mentioned during the discussion: [more to follow]

Books mentioned during the session: [more to follow when I can review the session’s recording]

  • Bowling Alone, by Harvard’s Robert Putnam, shows the decline in America’s “Social Capital” — by many measures — over recent decades. (I think this decline motivates our hunger for social engagement via online games, social media, etc.)
  • What Video Games Have to Teach Us About Learning and Literacy (2007) by James Paul Gee.  His short opinion piece in Wired speaks to educators and to game designers.
  • Rainbow’s End, a novel by Vernor Vinge. (Recommended by Noah and Kent as a vision of augmented reality.)
  • Snow Crash, a novel by Neal Stephenson. (Mandatory reading for social-media industry participants. An early vision of virtual reality, with insight into our relationships with our avatars.)

Tomorrow: Speaking at Harvard Business School’s Cyberposium

Hey! I have a blog. (I wonder if this thing still works?) I’ll (re-)start with a personal note: I’m moderating a panel tomorrow (Saturday) at Harvard Business School’s Cyberposium 15 conference.

I’m delighted with our session’s focus: Where Gaming and Social Identity Collide. We’ll look at social games (and what I still call “community-based games”), how they overlap with other social media, and the implications for other industries. The panelists are a great balance, bringing backgrounds in product development, academia, marketing, publishing (digital and old-school), and the creative side.

Cyberposium draws an interesting mix of industry and finance executives, along with the predictable MBA-student crowd. The conference’s annual themes have addressed different aspects of digital (generally Internet) technology.

I’ll use this space to publish some links that we wind up promising the crowd.  That will surely include some industry references and news sources, especially for social games.

Linguistically, I’ll acknowledge that the name does have a distinctly mid-1990’s ring to it. That’s only fair: this is Cyberposium 15, after all; it started in 1995. But “cyber” does seem increasingly marked, if only to judge by the increasingly snarky reactions it seems to draw. That said, it remains productive. Arnold Zwicky, in a recent roundup of portmanteau words, cited cyberchondria and cyberteria. I’ve no idea how he missed cyberposium.

Adding: Of course I was joking.  There’s no reason why any linguist, not to mention a Stanford linguist, would be aware of a small (if excellent) high-tech conference at Harvard’s Business School.

And by the way, anyone curious about language should enjoy this favorite of mine: Prof. Zwicky’s 1980 booklet, Mistakes. Although it was intended for his linguistics class, the assumptions it makes about your preparation are, as he says, “modest.” And how many academic notes draw their examples from Groucho, cummings, Perelman, and railway graffiti? (I’m using note in the HBS sense: a supplementary teaching document that might run to 40 or 60 pages.)

Celebrity Calamity: game that actually teaches financial literacy

Today sees the release of Celebrity Calamity, a browser-based game that has already been shown to improve financial-literacy skills. The game comes from the Doorways to Dreams Fund (D2D), and is inspired by the research of D2D’s founder, Harvard Business School Professor Peter Tufano. D2D plans other games to target an endemic lack of financial skills and knowledge, particularly among low-income single mothers.

Here’s the best part: it seems to work. Preliminary testing results by D2D show:

  • financial skills & confidence up 15% to 30%
  • financial knowledge up 55% to 70%

I’m pleased to have had a small role in the Celebrity Calamity team. At the request of Prof. Tufano and D2D’s Nick Maynard, I assembled a few local game designers into a small brainstorming group.  Nick and I had hoped to conclude with a few high concepts and general principles, but the team exceeded all our hopes, and quickly converged on a core vision. After a huge amount of work by Nick and his development teams: it’s a game! From that initial brainstorming team, Jason Booth stayed with the project as advisor and designer.

Celebrity Calamity got a write-up by Anya Kamenetz on Fast Company’s blog. You can see the press release, or view the trailer on YouTube, or check out interviews with the test users.

Nerdly sub-cultures and their humor

One joy of the internet is that, no matter how narrow your niche, you can surely find blogs to support it, comics to self-parody it, and communities to squabble about it.  These examples crossed my desk (er, desktop) this morning:

(1)  For philosophy nerds: Advanced Dungeons & Discourse

Bayesian Empirimancy: prior spell-efficacy

The rewarding Mind Hacks blog highlights this philosophy-themed D&D role-playing quest.

And there’s the original Dungeons & Discourse, also by Dresden Codak.  (The 8th-level positivist is immune to metaphysics, but has low charisma.)

(2) For language nerds: worst pun ever, with analysis

My own favorite guilty nerdly pleasure, Language Log, reports this appalling pun (an 18-second video). The pun is ‘good’, but it’s the comments below that got my attention, rife with linguistic-style categorization-squabbles, with duly-offered comparables and counterexamples.  (That said, Karen is right: it’s not a mondegreen; it’s not like “Mots d’Heures: Gousses, Rames.” And I’m always happy to see a Hendrix reference in any thread.)

Of course, I can’t mention nerdly humor without this modern classic:

(3) For comp-sci/math nerds: XKCD

If you’re this kind of nerd, and you didn’t yet know about XKCD, well, then, you’re welcome. This recent favorite captures the full XKCD mandate of “romance, sarcasm, math, and language.”

The culture of the XKCD forums (excuse me: fora) is worthy of its own examination. Later. The various emergent behaviors include a variety of forum games.

Metrics: “Online Games is best performing game sector index at -29%”

Yes, thirty-percent down is the new up.

I’ll rarely focus on market values, but since I just posted on relative new-media/old-media values I’m quoting (in the title) this month’s Video Game Briefing (PDF) from Paul Heydon at Avista Partners. (You can subscribe.)

Online games (down 29% from Jan 2008) narrowly led PC/Console games (-30%) and distributors/accessories (down 34%). They all outperformed the S&P 500 (-38%), well ahead of retailers and mobile games (both down about 50%). The low point was Nov 20, ’08, but not by much.

The report also shows regional performance, ranked as you would expect (Asia, S&P, U.S., Europe) but perhaps spread more broadly than you would expect. And there are many details on M&A and on equity raised. In total:

  • Over $1.8 billion of M&A deals in global sector (LTM)
  • Over $1.6 billion raised in global sector (LTM)

Metrics: New media’s mkt value up 102%. Old media lost 32% (’05-’08).

The communications, media, and technology (CMT) sector lost 47% of its market value in 2008, worse than most markets overall. An Oliver Wyman press release, summarizing their 2009 State of the Industry Report (PDF), notes that within that sector, for the 5-year period:

Traditional media — including media agencies, publishing, and broadcast and entertainment — lost 32% of its market value, or $137 billion, while new media (online content and services) gained 102% or $58 billion.

The top performer in the media segment was China’s Tencent, with a market value of $11.6B.

(The above quote is from the press release.  If you can find that data in the full report — or other analysis of the new-media-subsegment —  then I owe you serious respect.)
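For scale, here’s a back-of-the-envelope check of what those two percentage figures imply about the segments’ start-of-period market values. This is my own arithmetic from the press-release numbers, not anything stated in the report:

```python
# Back-of-the-envelope: implied start-of-period values behind the quoted changes.
# My arithmetic from the press-release figures, not numbers stated in the report.
traditional_change, traditional_delta_usd = -0.32, -137e9   # lost 32%, i.e. $137B
new_media_change, new_media_delta_usd = 1.02, 58e9          # gained 102%, i.e. $58B

traditional_base = traditional_delta_usd / traditional_change   # ~ $428B
new_media_base = new_media_delta_usd / new_media_change         # ~ $57B

print(f"Implied traditional-media starting value: ${traditional_base / 1e9:.0f}B")
print(f"Implied new-media starting value:         ${new_media_base / 1e9:.0f}B")
```

In other words, the “new media” segment that doubled is still an order of magnitude smaller than the traditional-media segment that shrank.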

The report does discuss sector-specific strategies.  Strategic recommendations include strong focus on emerging markets and on broadening corporate scope, such as broadening from distribution to content. (More on this detail in a later post.)

In support of comScore’s assertion (which I argued against) that free-online game growth comes at the expense of paid content, note that 18% of US consumers “expect to spend significantly less” on “a la carte content purchases (including movie tickets, … downloads, games, etc.)” And 19% expect to spend “a little less.” Only 10% see spending more.  (Oliver Wyman’s November survey: Exh. 8, p. 13, of the full PDF report.)

Metrics: U.S. online-game growth: visitors up 27%; minutes up 42%.

Good news for the online-games biz, specifically the casual (mass-market) online games biz.  Discussion after these highlights from today’s comScore report (Dec. 2008 data):

  • Free online-game-site visitors grew 27% in 2008, to 86m.
  • Aggregate playing time jumped 42%
  • Online games consumed 4.9% of total Internet time (up from 3.7% in Dec. ’07)
  • Online display-ad views grew 29% to 8.6b (in Nov ’08)
  • The average player views 127 ads (unchanged year-over-year)
  • Ads per page view (“a measure of ‘ad clutter’”) dropped 17%, to 0.83

The top sites? Make your guess, then check the tables in the press release from comScore, and let me know if you were close. Suffice to say, I’m impressed by WildTangent’s good work. (Regardless of November’s major changes there.)

Why the growth? ComScore says that people have “turned to outlets such as gaming to take their minds off the economy”. Also, they are “turning to free alternatives.” A 14% drop in retail sales for PC games is cited as evidence.

I don’t buy it. As Dean Takahashi notes in VentureBeat, console games grew 19%. And, even if free-online (casual) and paid-retail games both reach broader demographics than last year, they nonetheless reach different demographics: I don’t see them as clear-cut substitutes. Maybe this is less a downturn-driven shift in spending habits than a continuation of casual-game growth, fostered by innovation, and by wider use of social content-sharing. (“I stumbled-upon a great game!”) By contrast, we didn’t see an exceptional year for retail-game innovation.

Footnote for the algebraically obsessive: Yes, 127 avg impressions against 8.6b ad views implies only 68m visitors in Nov ’08. That would be unique visitors to ad-supported sites, versus the 86m online-game-site visitors overall.
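For the truly obsessive, here’s that footnote as a two-line check (a quick sketch; the figures are just the ones quoted above):

```python
# Sanity check of the footnote: ad views divided by average impressions per player.
ad_views = 8.6e9        # display-ad views on online-game sites, Nov '08
ads_per_player = 127    # average ad impressions per player

implied_visitors = ad_views / ads_per_player
print(f"Implied visitors to ad-supported game sites: {implied_visitors / 1e6:.0f} million")
# ~68 million, versus the 86 million visitors to online-game sites overall
```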

Sociolinguistics and the Botched Oath

Far too much has been said of the collaborative botch that President Obama and Chief Justice Roberts made of the Oath of Office.  But I do want to share some linguists’ observations, and note a connection to industry culture.  In this case: the culture of lawyers.

Linguists will happily note that Roberts’ misplacing of “faithfully” might reflect an instinctive grammatical superstition, one particularly favored by lawyers. Specifically, he over-extended the bogus “avoid split infinitives” rule to blindly cover all “split verbs,” and thus he avoided uttering “will faithfully execute.”

Mark Liberman, co-founder of the excellent Language Log, has cited the highly-influential Texas Law Review Manual of Style as a leading perpetrator of the split-verb superstition, and thus a key player in “Grammatical indoctrination at law reviews“. (He later suggests that split-verb-phobia also infected journalism. As is typical of Language Log, the comments rival the posts: one comment posits the AP Style Guide as the infectious agent.)

And for a different cultural dimension (but, really, just for fun) I give you another commentator’s suggestion that split-verb-phobia is

“evidently a hangup of the heathen English, not of us purer Anglophones from North Britain:
Scots, wha hae wi’ Wallace bled
Scots, wham Bruce has aften led…”

The Texas Manual more recently backed away from its error. (Yes, “error”! Even Fowler and Follett encourage placing adverbs within compound verbs. And if Fowler says this is a bogus rule, then even a hard-line prescriptivist should agree it’s bogus. Right, Mom?) But it has influenced thousands of lawyers, adding to other unique and distinct habits of speech and writing peculiar to lawyers and attorneys, such as those compound and redundant noun phrases. Law Professor Jim Lindgren writes:

This nonsensical rule against split verbs has caused entire volumes of law reviews to be filled with page after page in which adverbs have been squeezed out of their normal place. Most law professors who have dealt with law reviews recently seem either to have had disputes about the placement of adverbs or, worse, to have adopted the Texas approach, the approach of people who write as if English were a second language. It’s frightening to think that the ability of a generation of law professors to recognize their native language has been damaged by one silly book. Before picking up the Texas Manual in 1987, I had noticed that the ability of the law reviews to place adverbs correctly had deteriorated, but I hadn’t known the reason.

The best discussion I’ve found of the inaugural-oath event is in Benjamin Zimmer’s recent posting in Language Log. Again, the posting is good; it’s the comments that are great. (As Zimmer noted in a follow-on.)

Other, unrelated,  learnings and observations from that thread:

  • Such vows and oaths are “deaconed off” for practical reasons.  (A new word, to me, if an archaic one. It’s apparently a late-19th-century Americanism, OED-cited and variously attested, stemming from the New England Congregational church practice.)
  • The oath is not performative (it doesn’t make-it-so: “I bet a dollar” would be performative), in the sense of causing the man to become President. He already was. But it is as performative  as any other oath or promise.  That is: the Hippocratic oath won’t make you a doctor, but I’m happier if my doc has sworn to it.
  • Weirdly, no generative syntactician or truth-functional semanticist has yet stepped into that discussion to argue that I will faithfully execute X is “the same sentence” as I will execute X faithfully.

Why a blog?

There’s plenty of intellectual pondering about “why blog?” My reason is a pragmatic one: I think I need a web presence, a place for http://www.scherlis.com to land.  My previous hand-tooled website — edited with pride in NOTEPAD, and with the snappy graphics of the CERN home page, circa 1992 — was painfully old, and old-looking.

Meanwhile, the blog — as a high-level structure for web content — has become not just a standard, but the default web presentation. There are bad aspects to this, which I am likely to discuss in later postings. For now, I’m online, I look current-century, and I expect to populate this page with various of my presentations and papers, and with thoughts on interactive media, user experience, and online community.

I’m trying WordPress.com, because it seems to be the least-constrained tool in terms of insisting on the universal blog-format page. In other words: you can create generic old-school web pages. Hilariously, wordpress.com feels a need to tout and explain this as something new: “WordPress.com has a feature called ‘pages’ which allows you to easily create web pages.”

More soon!