Category Archives: blogging

FTC workshop explores future of journalism, regulation of aggregation

Early on Tuesday morning, I walked up Massachusetts Avenue to attend the FTC workshop on the future of journalism. Ten hours later, I emerged overstimulated by the volume of ideas presented, saddened again by the tens of thousands of journalists who have lost employment and energized by the quality of the conversations I’d had.

If you look at Danny Sullivan’s impressive liveblog of the FTC workshop on the future of journalism, you’ll see why.

The event began with a video about the changing media world from Minnesota Public Media, embedded below. More about the “Future of News Summit,” where it premiered, can be found at thefutureofnews.ning.com.

The FTC established a Twitter account for the conference, @FTCnews. You can see all of the participants’ tweets aggregated at #ftcnews.

Why convene the conference?

The very existence of the forum raised some eyebrows. How does the Federal Trade Commission factor into regulating journalism, after all? FTC Chairman Jon Leibowitz observed that:

…the ongoing revolution in the markets for news, then, warrants serious study for at least two reasons. First, markets for public goods such as news may work imperfectly. Competition policy is well-suited to evaluate these market imperfections.

Consumer protection policy is well-suited to help us understand the privacy and data security implications of the behavioral marketing used by media companies to increase ad revenues online. Second, and far more important, this is not just any market. The changes we are seeing in journalism will affect how we govern ourselves, not just the profits and losses of various news organizations.

The full remarks of FTC Chairman Jon Leibowitz on creative destruction and journalism in the Internet Age are available as a PDF. He was forthright in assessing that there was no reversing the Internet revolution. As he put it, “when Gutenberg printed the Bible, that was bad for those who illustrate by hand.” (Note: Jessica Clark argued that the “FTC Should Consider Policy Reform to Support Public Media 2.0” over at PBS’s MediaShift blog.)

Leibowitz was followed by Paul Steiger of ProPublica, an “independent, non-profit newsroom that produces investigative journalism in the public interest.” “The answer is not to save newspapers,” said Steiger. “The goal should be to assure the continuation of journalism.” He called Amanda R. Michel an “Internet genius” for her coverage of the 2008 campaign at the Huffington Post. Steiger also cited ProPublica’s investigative journalism on “hydraulic fracturing” in gas drilling as an example of the work the organization is doing.

Rick Edmonds of the Poynter Institute, author of the BizBlog there, presented results of the 2009 State of the Media study he co-authored. One of his final assessments was sobering: “Most surviving newspapers will be smaller, more expensive & targeted to older consumers.” As I replied to Andy Carvin, who wondered about the statement, given the debt loads that Edmonds cited, waiting for the hyperwired Echo Boom to “age into” newspapers might be a tough strategy.

The News from News Corporation

It’s a fair bet that the packed room in the morning was there in anticipation of remarks from Rupert Murdoch, founder, chairman and managing director of News Corporation, especially given that many of the media in attendance left to file stories once he was through.

Cecilia Kang covered his speech in the Washington Post in “Murdoch: Future of newspapers in online payment, feds should stand back.”  Jeff Bercovici offered a similar take in “Murdoch to Washington: Stay out of the way, but please help.”

The other 363 stories in Google News about Murdoch’s speech at the FTC attest to the interest in his remarks, too, including an excellent piece in the Wall Street Journal, “FTC to examine possible support of news organizations.”

Why? Murdoch may have held that “a diversity of views is essential to debate” but he was crystal clear:  news organizations that don’t adapt should fail, “just like a restaurant that makes meals that no one wants to eat.”

Further, even as he argued that the feds should keep the U.S. press the most free in the world, the government should also “use its power to make sure the most innovative companies can reach customers,” a view that sounded to this writer’s ears rather like net neutrality.

He was equally direct about one conviction: “online ads can’t sustain good journalism.” Murdoch intends to extend the pay model of the WSJ and @BarronsOnline to the Times UK, perhaps as early as next January.

Murdoch also suggested that the FCC’s cross-ownership rule that prevents single ownership of both TV and newspaper in local markets was “out-dated” in the Internet age, effectively suggesting that regulators both stay out of the news business and change the market conditions. He also observed that “for newspapers, spectrum could well prove an economic vehicle,” pointing to online convergence and the move to a readership increasingly consuming news through mobile computing devices.

What is the state of journalism?

Following Murdoch’s remarks, a state of journalism panel began with a focus on the financial health and accomplishments of newspapers and magazines. Martin Kaiser, senior vice president of the Milwaukee Journal Sentinel, spoke to journalism’s essential role: enforcing government accountability.

Bryan Monroe, a visiting professor at the Medill School of Journalism, brought attention to issues of  diversity. His remarks were published at the Huffington Post as “Why New Media Looks A Whole Lot Like Old Media.”

This panel also raised the first – and as it turned out, only – question to the audience, in the form of a poll: How many of you know someone under 30 who reads a newspaper in print? To my eye, about 40% of those present raised their hands. It’s perhaps worth observing that very few people present were in that demographic.

One hopeful model for reporting by assignment that was cited by David Westphal, Executive in Residence, Annenberg School for Communication & Journalism, University of Southern California, hailed from my former neck of the woods at the New England Center for Investigative Reporting.

Mark Contreras, chairman of the Executive Committee of the Newspaper Association of America & senior VP, newspapers at the E.W. Scripps Co., ended the panel with an analogy to the music industry, suggesting that there are “poignant similarities” to the news business. He favors ASCAP/BMI models for content protection and a B2B model for revenue generation. Given the music industry’s struggle to adapt to the Internet, that approach might merit more consideration.

Defending the aggregators

After the representatives of legacy media shared their perspectives, Arianna Huffington came to the defense of aggregators and new media, like her own enterprise, the Huffington Post.

As she dryly observed, citing a post by Mike Masnick at TechDirt, there’s some news aggregation by News Corp/IGN out there. All Things Digital, for instance, links and aggregates content, as does Rotten Tomatoes.

Her remarks are posted in full there, as you might expect, in “Desperate Metaphors, Desperate Revenue Models, & the Desperate Need For Better Journalism.”

Huffington didn’t mince words in her denunciation, either:

It’s time for traditional media companies to stop whining and face the fact that far too many of them, lulled by a lack of competition and years of pretax profits of 20 percent or more, put cash flow above journalism and badly misread the web when it arrived on the scene. The focus was on consolidation, cost-cutting, and pleasing Wall Street — not modernization and pleasing their readers.

They were asleep at the wheel, missed the writing on the wall, let the train leave the station, let the ship sail — pick your metaphor — and quickly found themselves on the wrong side of the disruptive innovation the Internet and new media represent. And now they want to call timeout, ask for a do-over, start changing the rules, lobby the government to bail them out, and attack the new media for being… well, new. And different. And transformational. Suddenly it’s all about thievery and parasites and intestines.

Get real, you guys. The world has changed. Here are some facts culled from one of the most popular anthems to the impact of technology on our world:

(I talked with Huffington afterwards about her comments and the transition the industry is going through. She asked if I’d like to write about it – so I did, in “Is journalism going through a Reformation?“)

Where are the readers? Whither remedies? A Yahoo! News consortium? And how does Google News work?

After lunch, media analyst Ken Doctor shared his thoughts about “where the readers are.” You can read more from him at Contentbridges.com.

Leonard Downie presented findings on the reconstruction of journalism from the Columbia journalism report, available at Columbiajournalismreport.org.

Lem Lloyd, vice president of channel sales at Yahoo!, shared details of the media company’s growing newspaper consortium. Lloyd said they’ve sold 18,000 campaigns on Yahoo!, amounting to more than six billion impressions. Behavioral targeting sales represent some ninety percent of that total.

Josh Cohen explained how Google News works with publishers. Cohen wrote about the FTC and the future of news at Google’s Public Policy blog. An extensive interview of Cohen on paywalls, publishers and partnerships by Danny Sullivan is also available at Search Engine Land.

Emerging models for journalism

The most dynamic panel of the day featured technologists, entrepreneurs and bona fide bloggers like Josh Micah Marshall of Talkingpointsmemo.com and Danny Sullivan, who took a break from liveblogging to participate.

Sullivan, at that point, was frustrated with offline metaphors applied online. And Jeff Jarvis, media pundit and CUNY professor, asked the FTC to “stay off the lawn,” suggesting that the premise of the event was about the survival of legacy players, not journalism itself.

In considering the prospect of not finding viable models, Robert Thomson, managing editor of the Wall Street Journal, observed that “the cost to society of not being able to afford specialist journalism is going to be profound.”

Chris Ahern, of Reuters, and Danny Sullivan replied to Thomson that it’s not “an either/or proposition.” Hybrid models for news are worth trying.

And in a memorable exchange, Marshall observed that “there is more of an ethic online of linking to the story that got the reporter on the track and then adding commentary” than is practiced by traditional media, alluding to stories that the AP and others have run without linking.

Media consumption trends, the economics of news and online advertising models

For more on the final panel and preceding presentations, consult Danny Sullivan’s liveblog of the FTC workshop on the future of journalism.

Ball State professor Mike Bloxham presented on media consumption based upon data that can be found at ResearchExcellence.com. He described a need for publishers to look cross-platform at media consumption in order to meaningfully gauge a “news footprint” that includes print, TV, online and radio.

Susan Athey, a professor of economics at Harvard, presented on the economics of news, particularly the trend toward “multi-homing” in consumption and the growth of online advertising. Her presentation addressed the salience of potential FTC regulation more directly than any other, aside from the chairman’s own remarks, predicting competition in online ad networks and between aggregators that would require oversight.

Law professor David Evans, following Athey, said that “the one thing I do worry about is mixing up market failure with nostalgia.” His paper on the economics of the online advertising industry is available online.

The final panel of the day addressed the importance of behavioral advertising to future business models. Jeff Chester of Democraticmedia.org asserted that “the news media industry should embrace fair information principles.”

Conclusions?

As I look back over the day, it’s not clear to me yet what the FTC intends to do, other than listen. I look forward to returning to the FTC tomorrow to learn more.

3 Comments

Filed under blogging, journalism, research, technology, video

Is journalism going through its own Reformation?

Is the old media view of journalism to Catholicism as new media is to Protestantism?

To be clear, this isn’t an exact metaphor, nor does it in any way reflect my opinions of any branch of organized religion (or those of my employer). It’s a tricky metaphor, to be sure, and one I’m on dangerous ground to entertain.

The notion was suggested by a conversation I had with Steve Waldman and Arianna Huffington at the FTC’s workshop on the Future of Journalism today in DC, where she gave a fiery (and funny) defense of new media, aggregation, and the shift to a more social news experience online. (Talking with Waldman about the future of journalism had its own resonance, given his new role as a media advisor at the FCC.)

As Huffington put it, “with so many traditional media companies adapting to the new realities, it was ridiculous to engage in an us vs. them, old media vs. new media argument.”  Her remarks followed those of Rupert Murdoch, whose speech was ably chronicled by Danny Sullivan in his liveblog of the FTC’s journalism workshop.

Students of history recall that Martin Luther broke away from the Catholic Church after his efforts to reform it were rebuffed. That break resulted in a schism that roiled Europe for centuries. The metaphor may be apt here because the way readers consume news has become more social. Protestants, particularly Quakers and Unitarians, embrace a personal relationship with a higher power that is not mediated by a priest. Historically, Catholics have relied upon priests to guide worship and intermediate their relationship with the deity. In medieval times, priests read the Latin of the Bible that illiterate parishioners could not.

The traditional “high priests of journalism” — newspaper and magazine editors — controlled what was covered, which letters to the editor were published, and where and when stories were run.

No more, or at least not in online news. Increasingly, readers have formed more direct relationships with sources. Even if the editors at the New York Times or Wall Street Journal choose not to cover a story, if someone uploads video from, say, an ACORN office, a campaign event in Virginia or a street in Tehran, the world can view it, share it and discuss it.

The very existence of this post is evidence of how the tools for production of news have been democratized. I’ve written about the trend before, when I blogged about NPR and PBS’s unconference or asked how, if we are the media, it changes society. And folks like Jeff Jarvis and Dan Gillmor have been writing about the transition to grassroots journalism for years.

It’s not at all clear yet what the Ninety-Five Theses of this “Reformation” in journalism will be. Online journalism ethics still pertain, and viable means of subsidy for production of that journalism will have to be tested and refined.

Today’s FTC hearings contrasted disparate views from Rupert Murdoch and Arianna Huffington, a contrast in both belief and model, along with a panel of newspaper executives who frequently focused on newspapers’ past, not journalism’s future.  “The Reconstruction of American Journalism,” authored by Len Downie and Michael Schudson, points to business models and challenges alike.

Huffington’s speech, published simultaneously at the Huffington Post as “Desperate Metaphors, Desperate Revenue Models, And The Desperate Need For Better Journalism,” led off with her belief that:

…the future of journalism will be a hybrid future where traditional media players embrace the ways of new media (including transparency, interactivity, and immediacy) and new media companies adopt the best practices of old media (including fairness, accuracy, and high-impact investigative journalism).

Perhaps 2010 will be the year that a reformation of journalism will find a workable medium between leaner, traditional news media, the growing flock of distributed contributors, and the higher power – civic good – that its audience expects and needs.

1 Comment

Filed under blogging, journalism, technology

When “we are the media,” how does it change us or society?

The changes that smartphones with cameras and an Internet connection are wreaking in society have been thoughtfully reported upon, relentlessly evangelized and ruthlessly derided, depending upon the angle or intent of the commentator.

The past days will occupy a few lines in the history books. Last night, the U.S. House of Representatives passed a milestone healthcare bill. And earlier in the week, a soldier killed fellow servicemen and women at Fort Hood.

Today, Paul Carr wrote that “citizen journalists can’t handle the truth” at TechCrunch.

I agreed with him on a few things. The video from “This American Life” (below) that Carr embedded was deeply affecting on this point, in terms of what becoming an observer can do to our involvement in what we are filming.

Changing an avatar to green or changing a location to Tehran did not, despite good intentions, prove to substantively help students escape repression. I gather from reading accounts from journalists that the solidarity demonstrated by doing so was both noticed and appreciated there. And there was a tipping point in terms of the use of the platform to bring attention to a political cause.

Where I was left frustrated is in Carr’s suggestion that those who are watching should be doing something more, whether in the hospital or, in the case of Neda, on the streets of Tehran, instead of documenting events with the digital tools at hand.

Mathew Ingram posted a thoughtful response to this notion on his blog, “Citizen Journalism: I’ll take it, flaws and all.” David Quigg wrote a considered reply to Carr’s post as well. Dave Winer was less charitable.

I found the example of Neda to be unworthy of the point I think Carr was trying to make.

It also brushed off two key factors: the effect that the release of that video had in revealing the death of a protester, and the fatal impact of the bullet itself on her heart.

As Suw Charman-Anderson pointed out in her detailed critique and debunking of Carr’s post, “Killing Strawmen,” (which I won’t repeat here), there was a doctor on-site, who was unable to do anything because of the massive trauma to her chest.

In my limited experience, you provide the standard of care to which you are certified and able to deliver, ceding primary responsibility to others more able as they arrive on scene. An EMT, for instance, couldn’t do much more than gauge consciousness, stanch bleeding, stabilize injuries, provide oxygen and transport people. Your choices must change if someone is in the wilderness, but in most scenarios, that’s accurate. Paramedics, nurses, doctors and surgeons each have progressively more expertise and responsibility.

In all of that, communication with the nearest hospital and ER docs available is crucial. Transferring information to both medical professionals and law enforcement is something a bystander can and should do.

And to some extent, communication and documentation is precisely what a member of the public equipped with a cameraphone can contribute, despite the vigor with which Carr has chosen to deride that role.

I don’t doubt that seasoned correspondents, armed with an understanding of the ethics and laws that pertain to reporting, are needed to convey information from the battlefield or to analyze the meaning of the trends that confront us.

In fact, Brock Meeks, one such trusted newsman, made a comment on my post about Twitter lists that emphasized just how important getting the facts right is to both the audience and media.

I was left wondering about other situations where the “citizen journalists” Carr derides are providing an important function in the newsgathering ecosystem, whether in reporting natural disasters, disease, voting irregularities or consumer sentiment.

A calmer approach might consider whether models of “hyperlocal” journalism that marry traditional media to online platforms might have a chance of success.

My intention is not to suggest that observers couldn’t play a useful role in a crisis. It was to say that when there are qualified staff on scene, documenting what is happening in the absence of mainstream journalists may be useful for those that follow – including news outlets that may use video or audio gleaned on site.

I agree with Paul that running images shouldn’t occur without a full understanding of the ethics or privacy rights involved.

Unfortunately, many tabloids have shown a poor grasp of either historically.

The fact that technology changes behavior doesn’t make it inherently bad. We’re all struggling to make sense of exactly what living in a modern panopticon created by one another will mean. It changes news, our conception of privacy, and even our perception of self.

The traits for good character and decency that the Greeks described millennia ago remain applicable, however, just as the ethics taught in journalism schools pertain to modern reporters armed with Flip cams, iPhones and a direct line to YouTube.

There will continue to be moments when war correspondents are confronted with choices about what covering conflict, versus participating in it, will mean.

Similarly, people driving by an accident will need to be thoughtful about “playing paparazzi” as opposed to making sure that those involved are receiving the aid they need. Anyone who has a conflict about whether to “tweet or treat” might do well to consider what basic human decency means to them, personally.

Does an event need to be documented? Or does calling 911 and then moving to help trump documentation?

Citizens are looking for truth, honesty and facts, wherever we can find them. That need was frequently the subject of discussion during Public Media Camp, after which I wrote that “2009 is the year of We, the Media.”

Perhaps, as news organizations and citizens alike contribute to the body of knowledge online, a new model for collaborative journalism will emerge that serves each better.


4 Comments

Filed under blogging, journalism, social media, technology, Twitter, video

Twitter Lists: We are informed by those we follow. We are defined by those who follow us.

“The power of Twitter is in the people you follow.”-@nytimes

You’ll find that quote at NYTimes.com/Twitter, where the New York Times has built a page of Twitter lists curated by its editors, its writers and, presumably, the help of its considerable audience.

As this feature has rolled out, I’ve read knee-jerk criticism, thoughtful analysis and wild evangelizing, and observed “lists of lists” being collected as sites like Listorious and Listatlas.com spring up to rank them.

Tech pundits and, rapidly, news organizations have created lists that apply new taxonomies, imposing human-defined categories onto the roiling real-time tweetstream.

Readers are defined and informed by the diversity of the information sources that they consume. In a user-created Web, we are defined by those who choose to follow us, including any lists or tags that they associate with  our names.

It’s been exciting to watch. And if you’re a reader of David Weinberger, author of “Everything is Miscellaneous,” you might recognize this emergent behavior as a familiar phenomenon. Twitter users are using lists to organize one another into understandable taxonomies. Folksonomies, to use the term coined by Thomas Vander Wal.
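The mechanics of a folksonomy are simple enough to sketch in a few lines. This is an illustrative toy, not real Twitter data: the list names and account handles below are invented. The point is that when you invert list membership, each account ends up described by whatever labels other people chose for it, with no central taxonomy involved.

```python
from collections import defaultdict

# Hypothetical curated lists: a label chosen by a curator -> the accounts on it.
lists = {
    "tech-pundits": {"alice", "bob"},
    "journalists": {"bob", "carol"},
    "security": {"alice"},
}

def folksonomy(lists):
    """Invert list membership: for each account, collect every label
    that curators have filed it under."""
    tags = defaultdict(set)
    for label, members in lists.items():
        for member in members:
            tags[member].add(label)
    return dict(tags)

# "bob" ends up tagged by two different curators' vocabularies at once.
print(folksonomy(lists))
```

The emergent categories belong to the crowd, not to any one curator, which is exactly what makes appearing on 176 lists a kind of distributed verdict on what you write about.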

Users have some control over which Twitter lists they appear upon. If you block a user, for instance, you can remove yourself from that user’s lists, if for some reason you don’t want to appear on them.

What we can’t control, once we make ourselves public there or elsewhere on the Web, is how others tag or list us.

This goes back to what Weinberger (along with Doc Searls, Rick Levine and Christopher Locke) wrote about in “The Cluetrain Manifesto” ten years ago. “Markets are conversations.”

I suspect that in the weeks ahead, both companies and individuals may find themselves on lists that they perhaps would not wish to define as part of their brand identities.

“I would not join any club that would have someone like me for a member”

As I quote Groucho Marx, today, I feel fortunate, for two different reasons.

First, to date, I’ve been included on 176 lists, none of which I’m embarrassed or insulted to be on. You can see all of them at “memberships,” which is a friendly way of describing inclusion.

Thank you. I’m humbled.

Second, most of the lists are being used by individual users to categorize others as providers of a particular sort of information.

Overall, I’m most closely associated with technology, journalism, security and media. That’s  a good sign, given my profession! I was glad to see that the account I maintain at work (@ITcompliance) has been added to 33 lists, primarily compliance, information security, cybersecurity and GRC.

I’m talking about the right things in the right places.

Certain lists, however, have meant many more people reading me than otherwise would have, because of the hundreds or thousands of people who have chosen to follow them, due to the influence of their creators. I’m thinking about lists like these, some of which have gone on to become popular at Listorious.com.

@palafo/linkers

@palafo/newmedia

@kitson/thought-leaders

@jayrosen_nyu/best-mindcasters-i-know

@Scobleizer/tech-pundits

@Scobleizer/my-favstar-fm-list

Thank you, fellas.

Like any other tools, lists will no doubt be used for good and ill. An outstanding article by Megan Garber, “Fort Hood: A First Test for Twitter Lists” in the Columbia Journalism Review, shows how news organizations can leverage the feature to curate the real-time Web for the online audience.

The lists—which offer a running stream of information, updates, and commentary from the aggregated feeds—represent a vast improvement over the previous means of following breaking news in real time. In place of free-for-all Twitter hashtags—which, valuable as they are in creating an unfiltered channel for communication, are often cluttered with ephemera, re-tweets, and other noise—they give us editorial order. And in place of dubious sources—users who may or may not be who they say they are, and who may or may not be worthy of our trust—the lists instead return to one of the foundational aspects of traditional newsgathering: reliable sources. Lists locate authority in a Twitter feed’s identity—in, as it were, its brand: while authority in hashtagged coverage derives, largely but not entirely, from the twin factors of volume and noise—who tweets the most, who tweets the loudest—authority in list-ed coverage derives from a tweeter’s prior record. Making lists trustworthy in a way that hashtagged coverage simply is not.

Garber goes further in exploring what role lists may play in journalism’s future, as organizations collaborate with both their audience and one another in curating user-generated content. It’s a great piece. Pete Cashmore, of @mashable, has written more about this at CNN in “Twitter lists and real-time journalism.”

Individuals and news organizations alike can create lists as needed. For instance, as the House debates a historic health care bill here in Washington, you can follow the discussion at @Mlsif/healthdebatelive

As Cashmore points out, in the social, “people-centric Web,” we use our friends as a filter. As Paul Gillin observed,  everything that you’ve learned about SEO may be useless in a more social Web. Google’s new Social Search shows how, if we choose, our search results can be populated with content from our circle of friends.

On Twitter, we can now use the lists from trusted friends and news organizations to curate the real-time Web. That makes them useful, immediately.
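Garber's distinction between authority-by-identity and authority-by-volume can be sketched as two filters over the same stream. The tweets, handles and list below are invented for illustration: a hashtag filter admits anything carrying the tag, noise included, while a list filter admits only vetted sources, even when they forget the tag.

```python
# A hypothetical slice of the real-time stream during a breaking story.
stream = [
    {"user": "reporter_a", "text": "Vote tally confirmed #healthdebate"},
    {"user": "random_egg", "text": "lol #healthdebate #followme"},
    {"user": "reporter_b", "text": "Floor vote starting now"},
]

# A curator's vetted sources, the equivalent of a trusted Twitter list.
trusted_list = {"reporter_a", "reporter_b"}

def by_hashtag(stream, tag):
    # Admits any tweet carrying the tag, regardless of who sent it.
    return [t for t in stream if tag in t["text"]]

def by_list(stream, members):
    # Admits only tweets from vetted accounts, regardless of tagging.
    return [t for t in stream if t["user"] in members]

hashtag_view = by_hashtag(stream, "#healthdebate")  # catches the spam too
list_view = by_list(stream, trusted_list)           # catches the untagged update
```

Both filters return two tweets here, but different ones: the hashtag view includes the junk tweet and misses the untagged report, while the list view does the reverse. That trade is the editorial judgment a list curator is selling.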

And after a week full of public grief here in the U.S., that’s good news.

21 Comments

Filed under blogging, journalism, social bookmarking, social media, technology, Twitter

At the NPR and PBS unconference, 2009 is the year of “We, the media”

John Boland at Pubcamp

“TV, radio and pro journalism still matter in this new ecosystem”-John Boland, PBS.

This past weekend, I attended Public Media Camp, an unconference at American University in Washington, D.C.

I came away from the two days of sessions, talks, informal discussions, random encounters and rapid-fire information exchange inspired, exhilarated and a bit exhausted. That last is why it took a day to get a post up. By its nature, I couldn’t go to everything. What I did attend, I tried to take notes upon and livestream to Livestream.com and uStream. When it comes to archiving that video, unfortunately, I endured two crashes and suffered from the lack of a decent mic. Happily, much better video will be coming online from other sources over the next week. What follows are my thoughts, links and video from “Pubcamp.”

Citizen Journalism and public media

The first session of the day remains one of the most memorable. Citizen journalists and local bloggers have much to learn from – and about – one another. “We the media” is a theme I pick up later in this post. Suffice it to say that the democratization of the tools for information sharing has caught some producers unaware and left many stations understaffed, at least at the level it takes to effectively engage with those in the community creating the content. That said, many NPR editors and writers are doing quietly effective work in finding, engaging and collaborating with bloggers in the community. I mentioned Universal Hub in Boston, although I’ll leave it to Adam Gaffin, Radio Boston and WBUR to relate exactly how well that relationship works.

@jessieX referenced the tensions in this session in her post on generational differences, “My Takeaway,” where she captures the insight she shared with me in person.

Video of the  citizen journalism session is available on-demand.

Tools for curation of audience-generated content

This was one of the best attended sessions of Public Media Camp and, for any number of reasons, one of the best, at least in my view. The standing room-only group was organized into a circle and shared dozens of useful tools and services that can aid stations and editors in aggregating, organizing, filtering and curating pictures, video and text generated from listeners. “We all want to open up the floodgates to UGC and crowdsourcing but there’s issues of trust,” said Andrew Kuklewicz.

My favorite metaphor from Public Media Camp came from Andy Carvin here, in the idea of “trust clouds,” or the social network of people around us that represents who we can believe, retweet, link or otherwise invest with our own reputation. A tool for doing just that is at Trustmap.org. Newstrust.net also came up as “a guide to good journalism.”

Such tools and relationships are critical both to the use of user-generated content by stations and to the decision of readers and listeners to trust and, in the social media world, pass on information. As I commented during the session, consumers of media increasingly follow bylines, not mastheads. To borrow David Weinberger’s phrase, “transparency is the new objectivity.” By showing readers how and where audience-generated content was sourced in real time, media organizations can make a stronger case for the veracity of such information.

Tools included:

Greg Linch shared the approach to curation that Publish2 takes: “Social Journalism: Curate the Real-Time Web.”

Social Media Success

The most obvious case study in social media success may be Andy Carvin himself. The impact of his efforts has been deep and far-reaching throughout NPR’s shows and staffers. As Amy Woo put it, “I feel the same way about Andy and his tweeting as I do about Diane Rehm.”

Carvin offered compelling examples of success, like an NPR partnership with content discovery service Stumbleupon to create a reciprocal connection with Twitter. With a little tweaking, a retweet can equal a stumble.

Another site, criticalexposure.org, “teaches kids to take pics as a way to be advocates for social change,” said Carvin.

He also said that NPR’s Facebook fan page generates some 8% of NPR web traffic. Their testing shows 1 post every 60-90 minutes is ideal for the audience. That connection came courtesy of a listener, at least at the outset: the NPR fan page on Facebook was created by a fan. That fan then gave it back to the organization, says Jon Foreman. Carvin’s curation of public radio content took it to the next level.

Hurricanewiki is likely to be cited as a classic case in social media success, where more than five hundred people came together, organized through Twitter by @acarvin. You can see the results at Hurricanewiki.org. Carvin also created a hurricane resources community for Gustav on Ning, built in about 48 hours.

One example that came up in multiple sessions is NPR’s Vote Report. Jessica Clark and Nina Keim wrote a report on it: “Building #SocialMedia Infrastructure to Engage Publics.” And while Carvin pointed out where Vote Report fell short, the idea behind enabling listeners to “help NPR identify voting problems” holds some promise. The use of social media for election monitoring is spreading globally now, as can be seen at Votereport.in in India.

There was a different issue with InaugurationReport: volume. Carvin said that there was simply “too much social media content to effectively curate.” By way of contrast, even a few hundred engaged listeners could effectively use the #factcheck hashtag promoted at http://npr.org/blogs/politics to fact-check the U.S. presidential debates in real-time.

Greg Linch shared a collection of social media guidelines curated at Publish2, including NPR’s social media guidelines. There’s a careful eye keeping watch here on the ethics that go with the new territory: the @NPR ombudsman was present (she’s @ombudsman on Twitter) and brought attention to how the public will relate to any perceived bias shown on social media platforms.

A standard for conduct matters. It’s not all peaches and cream, after all, given the ugliness that online discourse descends into on many occasions. “Posting on our site is a privilege, not a right,” said Carvin regarding comment trolls and spammers on NPR sites.

Video of the social media success session is available online at uStream.com.

Public Media and Gaming

One of the more entertaining and creative sessions at Public Media Camp was the hour on gaming. Educational gaming can raise literacy rates in children, after all – could NPR deliver further by reaching into this interactive medium? Nina Wall (@missmodular) said, in fact, that PBS Kids will soon have available an API similar to NPR’s for educational games.

An excellent summary of this discussion can be found at AmericanObserver.net. Video of the public media and gaming session is available online at uStream.com.

PictureTheImpossible is one intriguing example of the genre. The online, community-based game was jointly developed by RIT and the Rochester Democrat & Chronicle.

The discussion also included Kongregate and their “social gaming” model, which provides a platform and revenue share for developers. Could NPR follow suit?

Or what if NPR created a fantasy league for news? Points could be accrued for newsgathering, with players trading shows or writers.

It’s been done for politics – check out the case study of an @NPR fantasy league, from Julia Schrenkler: Minnesota Public Radio’s “fantasy legislature.”

My favorite suggestion, however, came from Andy Carvin: a social “Wait, Wait, don’t tell me!” game where the audience can create news quizzes and then challenge one another on Facebook or the Web.

Social Media FAIL

The first FAIL from Andy Carvin? When the hype around crowdsourcing with Amazon’s Mechanical Turk didn’t deliver. Here’s the Wired story on questions about crowdsourcing.

Video of the social media FAIL session is available on-demand. Amy Woo and other attendees offered many more examples of failures.

Apps for Public Media

The last session of Pubcamp kicked off with a description of @AppsForDemocracy by Peter Corbett. Interesting examples included:

ParkItDC helps people find parking in DC, including which meters are broken.

AreYouSafeDC shows potential threats.

StumbleSafely is a guide to bars & avoiding crime in DC.

FixMyCityDC is a web-based application that allows users to submit service requests by problem type.

And the winner, DC311, enables iPhone access (download from iTunes) to the District’s 311 city service site, coupled with a Facebook App.

There’s more to come: In 2 years, the vision laid out by Corbett includes “muni data standardization, open civic app ecology and the ‘real-time muni web.’” And in 5 years, the vision includes ideas seemingly lifted out of science fiction: augmented civic reality, AI-driven civic optimization and “virtual flow working.”

What could be created for public media? Apps that enable listeners to create channels from the API for specific topics. Apps that combine real-time data feeds from government sources with local bloggers and radio stations. Apps that allow listeners to help filter the flood of information around events, like the Vote Report project.
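The first of those ideas, letting listeners carve topic-specific channels out of a story feed, can be sketched in a few lines. This is a hypothetical illustration only: the story fields (`title`, `topics`, `date`) are invented for the example and do not reflect the actual NPR API schema.

```python
def topic_channel(stories, topic, limit=10):
    """Build a listener-defined 'channel': the latest stories on one topic.

    `stories` is a list of dicts with hypothetical fields:
    {'title': str, 'topics': [str, ...], 'date': 'YYYY-MM-DD'}.
    """
    # Keep only stories tagged with the requested topic.
    matches = [s for s in stories if topic in s.get("topics", [])]
    # Newest first (ISO dates sort correctly as strings), trimmed to channel size.
    matches.sort(key=lambda s: s["date"], reverse=True)
    return [s["title"] for s in matches[:limit]]

feed = [
    {"title": "Broadband hearing recap", "topics": ["policy"], "date": "2009-10-19"},
    {"title": "Local station layoffs", "topics": ["media"], "date": "2009-10-20"},
    {"title": "FCC workshop preview", "topics": ["policy"], "date": "2009-10-21"},
]
print(topic_channel(feed, "policy"))
# → ['FCC workshop preview', 'Broadband hearing recap']
```

The same filter-sort-trim shape would apply whether the feed comes from a station API, a government data source or local bloggers.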

Why develop such apps? Andy Carvin believes that “the line between content, services & apps is blurring. To create a more informed public, it now takes more.” Not to create such innovation would, in effect, be irresponsible.

More posts, eclectica and public media resources

The PBS News Hour has partnered with the Christian Science Monitor on “Patchwork Nation.”

The work of Doc Searls at the Berkman Center on “vendor relationship management” came up, mentioned by one Keith Hopper. More details at http://projectvrm.org.

FrontlineSMS.com is a free group text messaging tool for nonprofits that is useful in disaster and crisis response.

Swiftapp.org was shared by @kookster: a free, #opensource toolset for crowdsourced situational awareness.

Plenty of social media application development is going on at PBS. Their social media guru, Jonathan Coffman, pointed to the tools at PBS.org/engage.

The Participatory Culture Foundation has launched Videowtf.com.

Economystory.org is a cooperative effort of public media producers to provide financial literacy.

Check out Radio Drupal and Radioengage.com for open source public netcasting information.

Session notes for @PublicMediaCamp are going up at the wiki at PublicMediaCamp.org and are being aggregated under #pubcamp on Delicious.com by Peter Corbett.

My Takeaways

There are a lot of smart, savvy, funny geeks in public media, passionate about delivering on the core mission of education, media literacy and good journalism.

This same cadre is pushing innovative boundaries, whether it’s engaging the audience, creating new technology platforms or expanding the horizons of computer-assisted reporting. Database journalism is alive and well at NPR – just look at this visualization of the U.S. power grid.

Vivian Schiller said during her keynote that “2009 was the year everything changed.” Out of context, that statement drew raised eyebrows online. In person, there was more clarity. The massive disruption to the newspaper and traditional media industry is now resulting in significant layoffs and a seachange in how people experience events, share information and learn about the issues. Despite the challenges of ingesting a torrent of new sources of information, the concept of “We the media” has deep roots: many more people can contribute news and help analyze it now that the tools for communication have been democratized and often made freely available online.

What’s missing in that fluid mix of updates, streams and comments is trust in veracity. As we all move into the next decade of the new millennium, the central challenge of public media may be making sense of the noise, taking much the same approach that it has in the past century: report on what’s happening, where it happened, who did it and why it’s important, with a bit more assistance from the audience. Given the loyalty of tens of millions of listeners, “we the media” might just have some legs.


7 Comments

Filed under blogging, journalism, social bookmarking, social media, technology, Twitter

Supreme Court preview for the 2009-2010 session from ACS

The Supreme Court of the United States


What cases coming before the Supreme Court will be the “most interesting and have the most impact” on the American people? That’s a matter of considerable interest before the Justices hold their opening conference next Monday.

According to the lawyers assembled for the American Constitution Society for Law and Policy yesterday, the court will consider the constitutionality of life sentences for juveniles, free speech, campaign finance and corporations, revisit elements of Miranda rights, and hear a case on antitrust and the NFL.

Quite a docket.

The Supreme Court will also, in what Michael Carvin called the “most important separation of powers case in 20 years,” take a hard look at the Public Company Accounting Oversight Board (PCAOB). Is the PCAOB, established by the Sarbanes-Oxley Act (SOX), a “5th branch” of government?

Michael Carvin was just one of an estimable collection of legal minds on the panel, moderated by one Thomas C. Goldstein. Goldstein, along with arguing some 21 cases before the Court, is principally responsible for SCOTUSblog. Goldstein and Carvin were joined by Pamela Harris, executive director of the Supreme Court Institute at the Georgetown Law Center, Doug Kendall, founder and president of the Constitutional Accountability Center, Lisa Kung, director of the Southern Center for Human Rights, Deanne Maynard, partner at Morrison & Foerster and Paul Smith, partner at Jenner & Block.

In the spirit of legally-inspired disclaimers on accuracy of interpretation, I should note before I go any further that, while I have a parent who is a lawyer and am romantically involved with a law professor, I have no formal legal training and came to the panel as a journalist and observer.

Campaign Finance

Up until now, says Doug Kendall, the rule has been to limit corporate participation; the Court had upheld rules restricting electioneering communications within 30 days of an election. The case before the Court is one that has attracted gallons of media ink, due in no small part to the involvement of a well-known citizen: Secretary of State Hillary Clinton. Consideration of campaign finance is already underway and has received additional attention because it was the first case the newly sworn-in Justice Sotomayor heard.

Doug Kendall brought up the 1990 precedent of “Austin v Michigan” as a way of exploring the issue of free speech by a corporation vs speech by an individual. The argument that he put forward goes back to the language of the U.S. Constitution, which refers to persons and people – but not businesses. Kendall argued that the distinction is consistent with first principles. If Austin is overruled, he said, it would unleash corporate campaign expenditures.

Mike Carvin, in a rebuttal that evidenced his considerable experience in courtroom oratory, brought up the example of corporate media outlets like MSNBC or the Washington Post endorsing a candidate. He questioned whether that would be any different than a corporation buying advertising space to endorse a candidate. Eliminating core political speech rights, he argued, is not consistent with the First Amendment.

Carvin asserted that 26 states don’t regulate speech in this way and don’t operate any differently. “It would be one thing if we were eviscerating rights for free speech,” said Carvin. “Are we doing this in the name of preserving McCain-Feingold?” He strongly suggested that free speech rights should not “be sacrificed on such slim evidence.”

Kendall observed that the “First Amendment also includes something about the freedom of the press” – different than, say, Exxon Mobil. “This case raises fundamental questions about what, at its core, our Constitution protects,” he said, positing the analogy of “We the people vs We the corporations.”

After that exchange, the substantive issues of whether video depictions of animal cruelty are protected under the First Amendment or national monuments on private land felt positively quotidian, despite the rigorous analysis of the precedents and relevance of the matters.

Miranda, separation of powers and revisiting federalism

Harris explained that two different cases will be relevant to revisiting Miranda, one of which will address whether a citizen has the right to counsel during questioning. As she pointed out, these cases are “the first real cuts” for Roberts and Alito at the issue.

Another case will visit the question of whether you “deprive employer of honest services” by using business equipment – like, say, a computer – on the job for personal or family business. That’s a serious question, given both the open language and vagueness of the law in question and the way it could impact millions of people who conduct personal business online daily.

Harris also indicated that the case raised questions regarding the separation of powers – classic federalism issues.

Another case, Melendez-Diaz, will focus upon the 6th Amendment, involving the Confrontation Clause. At issue is whether lab reports are testimonial, which goes to the question of their introduction at trial. Is it enough for a defendant to call the analyst as his own witness? Or does the state need to do so? It “seems like the question is answered,” said Harris. “What’s different?” The answer is practical: a new Justice. The practical concerns of bringing in analysts each time lab results are presented are significant: doing so would slow the process. Given her self-identification as a legal pragmatist, will Sotomayor be more receptive than Souter was? Harris doesn’t think so.

Separation of powers is also at issue with regards to the Public Company Accounting Oversight Board (PCAOB), as referenced above. The PCAOB, said Carvin, is “outside of government and presidential control,” and that’s a separation of powers issue. The defense of the agency is “unprecedented in American history,” he said. The President can appoint or remove chairmen from institutions within the so-called “4th branch,” like the SEC, FCC or Federal Reserve. In Carvin’s view, the PCAOB is a “5th branch,” with the SEC holding limited ability to influence regulations coming from it.

Criminal matters

According to Lisa Kung, Troy, Alabama has the highest number of capital convictions in the state. The case  of Hollywood v. Allen has raised issues around a  “cut & paste” judicial process at play there, where decisions are showing up with typos from drafts, like “proposed” making it through or misspellings of judges’ names. The question of the case? “What kind of deference does a federal court pay to this kind of judicial…nonsense,” said Kung, focusing less on the minutiae of mistakes and more on the quality of decisions.

Kung also discussed the case of Sullivan v. Florida, where the Supreme Court will decide on the issue of juvenile life without parole. The young man in question was sentenced at 13 years old to life in prison with no chance of parole. “Will it extend Roper?” Kung asked, referring to the case that barred application of the death penalty to those under 18. Will Kennedy’s reasoning apply?

Kung brought up a case in which prosecutors were caught acting badly in Iowa by fabricating evidence. The relevant question is how much immunity should the law give to a prosecutor?

Business Docket

A case involving the NFL and antitrust law is coming up, specifically the use of the NFL’s intellectual property by others. The decision and reasoning behind it could apply to any sort of joint venture down the road.

There’s also a patent case, examining what represents an eligible process. At issue in the Bilski case is a business method, specifically a method of hedging risk.

Merck is also on the docket. That’s “part of a trend where the court is taking cases cutting back on the plaintiff’s bar,” said Deanne Maynard. “At what point does the plaintiff know enough that it should file?” In this case, the issue is over the troubled pain reliever, Vioxx.

Finally, there’s an issue over property, a case of “classic takings mode,” says Carvin. In Florida, if you own a beach house, the law says that you own the sand down to the high water mark. Like many coastal communities, Florida’s beach homes have been losing land due to erosion. Local governments have tried shoreline replenishment on the beaches in the state, which added 75 feet. That’s the crux of the issue: the state then asserted that the added land is public. Home owners disagree.

Carvin, who argued Bush v. Gore in front of the Florida Supreme Court, pointed out that this case may be memorable in terms of how that court might change the law. In essence, he said, the court seems to have changed property rights by reinterpretation.

Parting thoughts: SupremeCourt.gov and finding information on cases

I was lucky to hear this preview of the cases coming up. I’m hopeful that the ACS will be releasing video of the session to the public. My observation after some searching online is that, despite SCOTUSblog and other watchers, resources that enable citizens to easily find out what cases are being heard aren’t easy to come by. The court’s website, SupremeCourtus.gov, provides information on recent decisions but the docket page is out of date and relies on the visitor knowing case numbers. Hearing lists are blank. The calendar page is a PDF that doesn’t indicate when individual cases are being heard.

I’m far from the first person to feel some angst over this issue. According to Fast Company, the court’s staffers know the site can use a redesign. The Sunlight Foundation’s Daniel Schuman confirms that in a post on redesigning the Supreme Court: “The Justices appear to agree. They’ve recently asked Congress for money to move control of the site in-house, taking over responsibility from the GPO.”

You can see the Sunlight Foundation’s mockup of what such a redesign might look like, below. The Foundation’s other suggestions, if implemented, would go a long way to making the Court’s cases, decisions and operations more transparent to the American people. I hope they are taken up, along with the long list of cases above.

The Sunlight Foundation's mockup of a new Supreme Court website



1 Comment

Filed under article, blogging, journalism

Cloud computing and DC, OpenID, privacy, cybersecurity, 3121, CongressCamp, Gov20 and the US CIO

Fall came and with it a torrent of news and events. I’m still sifting through news, ideas and encounters from the Gov 2.0 Summit last week. I’m still smiling after meeting Clay Shirky, Craig Newmark and Vint Cerf. The “father of the Internet,” pictured below, was a kind, gentlemanly presence at Google’s offices after the Gov 2.0 Expo.

Vint Cerf at Google


Following up on Gov 2.0, I wrote about how D.C.’s CTO found both compliance and cost-savings benefits in cloud computing and reported on the OpenID federated identity framework set for a .gov authentication pilot.

In a snarky moment, I caught the Twitter fail whale surfacing during a discussion on cloud computing.

Ironic animal.

I recorded a half hour of video with Chris Messina and David Recordon discussing OpenID authentication and .gov websites.

I wrote a short piece that sized up U.S. CIO Vivek Kundra on Data.gov, OpenID and government transparency.

I blogged about how U.S. CTO Chopra focused on transparency and outcomes at Gov 2.0.

After I made it through that writing, I summarized new research from the IAPP that showed privacy policy success lies in collaboration with IT and synthesized the expectations of Center for Democracy and Technology analysts regarding federal technology policy here in Washington.

And I managed to get a post up about how 3121 brings social networking and security challenges to Capitol Hill that included an interview with the CTO responsible for getting this new professional network for Congressional staffers working properly.

At the beginning of the week, I also wrote three posts on Congress Camp, including:

I visited the FCC for the first time, where I watched the panels on broadband and healthcare.

And on one pleasant fall night, I also visited the National Press Club, where the DC Social Media Club hosted a panel that discussed how mainstream media is using social media tools.

I think I like living in the District.

I know this is a lot of “I” but hey, this is my blog. Thanks for visiting!

I can’t wait for the weekend! BBQs with friends and family, bike rides, plenty of time outdoors.


Leave a comment

Filed under article, blogging, journalism, technology

MSM using social media tools at the National Press Club

I went to the Washington, D.C. Social Media Club‘s fall kickoff meeting tonight, which featured a terrific panel on Mainstream Media Using Social Media Tools. The moderator, Jeff Mascott of Adfero, facilitated an excellent discussion with three journalists from traditional print publications:

I livestreamed the event through the digiphile channel at livestream.com. I couldn’t get the video from livestream to embed below correctly, so you’ll need to watch the session on demand at livestream.com. I wish I’d had a better mic and found a seat in the middle for a closer view. That said, the Social Media Club recorded a high quality version of the panel that will be available soon, so you won’t have to rely on my artifacted stream and low sound levels. Nota Bene: skip ahead to 6:30 or so, when the panel actually begins!

My insights for the night?

Challenges for the @Washingtonian include retaining a traditional editorial “voice” online while adding some irreverence and snark on social media platforms. Apparently, the editors want stories to be published in print first and on the Web second. That may be a tough balance to strike.

Social media “enables me to compete with NFL and ESPN,” said @Cindyboren of the @WashingtonPost. Twitter levels the playing field for her.

The toughest challenge for @RickDunham? Time management, given the need to keep up with updating the Houston Chronicle’s digital outposts and the conversations there. Community moderation is unending and necessary.

Rick also made a fascinating point about #journalism ethics and #socialmedia: in the digital age, keeping ideological balance in subscriptions to politicians’ Facebook fan pages matters. Reporters need to follow everyone on their beat.

I asked a question about sourcing, as you’ll see if you watch the video. The panel provided good answers. Both @cindyboren and @rickdunham apply classic standards of #journalism to confirm the truth of statements, usually by calling people or  “@’ing the source.” Pick up that phone!

Rick also made a fascinating observation: the Chronicle is realizing real advertising revenue by livestreaming confirmation hearings and Congressional town halls to interested readers. Er, viewers. By carrying such news events on their websites, newspapers have become in effect independent Internet TV stations. Hello, convergence.

As an aside, I learned Helen Thomas is @frontrowhelen on Twitter. @IkePigott made her an account.

Great event. Many new faces, with others now becoming more familiar as I get to know the local DC new media community.


1 Comment

Filed under blogging, journalism, social media, technology, Twitter, video

It’s not about the numbers. It’s about the connections.

Connections
Image by Amodiovalerio Verde via Flickr

Last night, I had a surprise:  my follower count on Twitter dropped by 148 in one fell swoop.

At first, I thought it was something I had tweeted – oversharing about the Forrester tweetup, or disinterest in sharing a clip of Supreme Court nominee Sotomayor. That didn’t jibe, however, with my gut.

What was inflammatory? What had I done that resulted in a huge loss of followers? As I drifted off to sleep, I thought: how important is this, really, in the grand scheme of things?

I’ve long since learned that one hallmark of netiquette on Twitter (Twittiquette, if you will) is not to talk about one’s follower numbers. (If only I could retrieve some of the replies I received back in 2007 after doing so, I’d be thrilled. No good.)

A paraphrase of most of them essentially boiled down to this: are you here to get followers or here to connect?

It didn’t take long to see where the real value was. And, more than two years later, I’m elated to look back and see how many marvelous connections I’ve made, many of which have led to friendships offline. Why is that important?

For me, that’s a simple answer: we live in a number-obsessed culture. Think about how many metrics we track, filter and can recall: poll numbers, net worth, MPG, CTR, Web uniques, 0-60 in __, GPA, APR, circulation, P/E ratios, DJIA, TCO, Mbps, R/W speed…on and on.

And, naturally, for those in the social networking world, we count subscribers, friends and followers. I’ve received far too many messages and spam promising me thousands of followers if I use this software or that service.

Honestly, they all leave me with the taste of fermented cough syrup in my mouth, with a healthy side of cod liver oil.

It’s not about the numbers: it’s about the connections.

Every follower or friend I’ve made has been through a conscious choice or organic growth. I’m proud of that. I’ve done it in what I might term the “new-fashioned way,” using much the same approach that Chris Brogan describes in his Twitter FAQ: “be helpful, share, communicate, use @replies a lot.” I tend to attribute “by @username” or “via @” nearly as much as directly @reply these days but the sense is the same.

Yesterday, I met Josh Bernoff, co-author of Groundswell. I had dinner with Shava Nerad and her beau, “Fish Fishman,” with Laurel Ruma joining in a bit later. I saw dozens of other friends from the local social media scene at two different tweetups.

I shared some groundbreaking journalism tools and advice, like best practices for journalists curating the Web. I shared messages and stories with newsies at the New York Times, Guardian, Wired, Gizmodo, Slate, The Register,The Center for Democracy & Technology and many others.

I read Stephen Baker on what may become of BusinessWeek and Bernard Lunn on creative destruction in publishing.

I shared a lovely bit of science fiction made real, via the irrepressible Steve Garfield, watching the latest in augmented reality:

I reviewed my sources, notes and interviews from a conference earlier this week and wrote an article. I enjoyed a two hour workshop with my colleagues, analyzing the strengths and weaknesses of our journalism. I even enjoyed a late night cocktail with someone I love deeply.

In all of that, what does a dip in follower numbers mean? Not a helluva lot.

And, as it turns out, the scuttlebutt is that Twitter is doing another purge of spammers and bots, a process that I recall from last year as well. My existential angst was unwarranted, my concern without merit – but the thought process it prompted, and recounting it here, was worth it.

I’m proud of my connections and my friends, of the social news network we’re all collaborating upon, and of the quality of the communication within it. I’m glad to bring it with me to Washington in a few short weeks.

The spammers can go live on whatever lower circle of digital Hades is reserved for ’em.


1 Comment

Filed under blogging, friends, microsharing, personal, social media, technology, Twitter, video

Amazon’s Mechanical Turk’s potential for social science, commerce

Today at Harvard Law School’s weekly Berkman Center lunch, Aaron Shaw presented on the potential Amazon‘s Mechanical Turk (AMT) holds for social science and the culture that surrounds it. His talk drew upon research-in-progress from the Berkman Center’s Online Cooperation group, in collaboration with Daniel Chen and John Horton.

Although the presentation itself, cheekily entitled “HIT me baby one more time, Or: How I learned to stop worrying & love Amazon Mechanical Turk,” was a bit light on statistics, the conversation within Berkman’s community around the issues of labor laws, privacy, methodology and technological potential was fascinating, as always.


Aaron Shaw at Berkman

As Shaw noted, the origin of the name for  Amazon‘s Mechanical Turk lies in a chess-playing “automaton” that was no mechanical creation at all, but instead a clever contraption that hid a chessmaster inside. Amazon’s version farms out small tasks — or “HITs” — that require a human to accomplish.

As an aside, I have to note that, as Peggy Rouse pointed out in Mechanical Turk, Powerset and enterprise search, there may be considerably more to Amazon’s strategy than the creation of a crowdsourcing market for simple tasks. She thinks Mechanical Turk may play a role in enterprise search down the road. She’s a canny observer; I’d recommend reading her thoughts.

Early in his presentation, Shaw offered up a shoutout to Andy Baio (@waxpancake) who asked two questions late last year in “Faces of Mechanical Turk“: “What do [Amazon Turk users] look like, and how much does it cost for someone to reveal their face?”

Faces of Mechanical Turk [Credit: Andy Baio]


The aggregated image is shown on the right. $0.50 was the magic price, apparently.

As Shaw noted, however, when it comes to the Turk, no public, trustworthy, aggregate data is available. What evidence is available derives from self-selecting surveys and experiments. Those samples showed a large number of women, from many countries of residence (although mostly in the US & India). Speculatively, he noted that the age of users appears to be low, while education and income are high.

Shaw posited that the geographic component is likely correlated with Amazon’s requirement that users hold a US banking account. As a result, Shaw’s research relied upon whatever his team could collect on the Turk or through interviews with users and Amazon executives.

So, does the Mechanical Turk work for its users? Sometimes. Shaw noted that once you get a few people performing a given task, the accuracy rate for completion goes up overall, providing the example of machine-learning algorithms.

As he noted wryly, it’s “Not all bots, cheaters and scripts.”
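The redundancy Shaw described, where having several workers complete the same task raises overall accuracy, can be made concrete with a simple majority vote over repeated answers to the same HIT. This is a generic sketch of the aggregation idea, not Amazon’s own mechanism; the task IDs and labels are invented for illustration.

```python
from collections import Counter

def majority_vote(answers_by_hit):
    """For each HIT, keep the answer most workers gave, along with the
    fraction of workers who agreed as a rough confidence score."""
    results = {}
    for hit_id, answers in answers_by_hit.items():
        winner, count = Counter(answers).most_common(1)[0]
        results[hit_id] = (winner, count / len(answers))
    return results

# Three workers labeled each of two hypothetical image-tagging HITs.
labels = {
    "hit-1": ["cat", "cat", "dog"],    # one noisy worker
    "hit-2": ["spam", "spam", "spam"], # unanimous
}
print(majority_vote(labels))
# → {'hit-1': ('cat', 0.6666666666666666), 'hit-2': ('spam', 1.0)}
```

The agreement fraction also gives a requester a cheap signal for which tasks to re-run with more workers.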

Task selection and design is important to that success rate: skill matters, on both sides. It’s not just the skill of users and their ability to follow instructions – success also relies upon the skill of the creators of the HITs. Social scientists — scientists of any stripe, really — recognize the issue here in experimental design.

The uses of the Turk cover a broad spectrum, though by nature each represents some form of crowdsourcing. Amazon itself used the Turk to generate product descriptions, questions and answers, thereby “spamming itself,” as Shaw put it.

Spectrum of users of Amazon Mechanical Turk

How else is the Mechanical Turk being put to use?

  • The Extraordinaries: “micro-volunteer opportunities to mobile phones that can be done on-demand and on-the-spot”
  • CastingWords.com is using it for transcription
  • AaronKoblin.com uses Mechanical Turk to create art. For $0.02 apiece, he pays workers to draw a sheep facing left; he then sells sheets of them for $20, some portion of which is donated to charity.
  • Also noted: oDesk, reCAPTCHA, Threadless, Aardvark, LiveOps

Aside from commercial, artistic or volunteer uses, Shaw believes that Mechanical Turk has considerable potential to enhance social science.

Specifically:

  1. As a pool of subjects for randomized experiments
  2. As a pool of inexpert raters for distributed observation, or “coding”

Advantages to labs?

Low cost of use, ease of paying subjects, speed, (potentially) diverse subjects, one HIT = one person, and workers do not (usually) interact.

Experiments can consist of contextualized real-effort tasks. Because the Turk is a real labor market, with real work such as text transcription, it has utility in many areas, like canonical games in economics and paired surveys.
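As an illustration of the “pool of subjects for randomized experiments” idea, assignment can be as simple as seeding a random generator with a worker’s ID, so a worker who returns to the HIT always lands in the same arm. This is my own sketch, not Shaw’s implementation; all names here are hypothetical:

```python
import random

def assign_condition(worker_id, conditions=("control", "treatment"), seed=42):
    """Deterministically randomize a worker into one experimental arm.
    Seeding on (experiment seed, worker id) keeps the assignment stable
    across repeat visits by the same worker."""
    rng = random.Random(f"{seed}:{worker_id}")
    return rng.choice(conditions)
```

Stability matters because, as noted above, “one HIT = one person” only holds if a returning worker can’t resample their condition.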

In other words, it’s neither reducible to a manifestation of the “Internet hivemind” nor to some sort of “latter-day child labor,” at least in Shaw’s view. The online conversation around the presentation, which included Esther Dyson, was more skeptical on the latter point, noting that the potential for skirting labor laws was not inconsiderable. Shaw readily conceded that the issue is salient; although he sees such labor issues as “downstream,” he expects to see more of them, given that the “tension is so clear, so stark.”

Shaw has been advised by Yochai Benkler while at Berkman, who evidently considers the Turk to be of use for content analysis via distributed observation. In this context, the ability for researchers to randomly assign HITs for raters to code objects is helpful. Shaw brought up Klaus Krippendorff, of UPenn, in the context of understanding some of the theory here; I’ll need to do my due diligence in understanding Krippendorff’s work.
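Krippendorff is best known for his reliability coefficient, alpha, which measures how far inter-rater agreement exceeds what chance would produce. As a rough illustration (my own sketch, not something from the talk), alpha for nominal labels can be computed from a coincidence matrix:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.
    `units` is a list of label-lists, one list per coded unit;
    only units with >= 2 ratings are pairable."""
    o = Counter()  # coincidence matrix: o[(c, k)] over ordered label pairs
    for labels in units:
        m = len(labels)
        if m < 2:
            continue
        for c, k in permutations(labels, 2):
            o[(c, k)] += 1 / (m - 1)
    n_c = Counter()  # marginal totals per label value
    for (c, _), w in o.items():
        n_c[c] += w
    n = sum(n_c.values())
    d_obs = sum(w for (c, k), w in o.items() if c != k)       # observed disagreement
    d_exp = sum(n_c[c] * n_c[k]
                for c, k in permutations(n_c, 2)) / (n - 1)   # expected by chance
    return 1.0 if d_exp == 0 else 1 - d_obs / d_exp
```

An alpha of 1 means perfect agreement, while values near 0 mean agreement no better than chance — exactly the red flag you want when pooling ratings from anonymous, inexpert Turk raters.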

Yochai has noted that specific groups involved in distributed computing projects, like SETI@home, have performed admirably. According to Shaw, in fact, “The Knights who say ‘Nee’ perform quite well when measured against other countries with distributed computing.”

I also heard about the “Turkopticon,” a Firefox extension that allows workers to submit feedback about HIT creators. Although Shaw said that it is not widely installed, it clearly represents a step towards community self-policing.

When asked about the utility of using the Turk for searching for missing computer scientist Jim Gray or searching for Steve Fossett’s plane, Shaw immediately recognized the value but hadn’t examined the data sets in question at length.

The question itself begged a follow-up, given the release of Chris Anderson’s “Free” this week: how and why are users motivated to perform HITs when altruism is involved? Is work of higher quality when there is money involved?

Shaw offered a cautious affirmation, though with reservations: Payment vs free is “such a loaded issue in society. The symbolic value of money or donation is humongous.”

A Berkman Fellow in attendance, Chris Soghoian, noted that his advisor pays 5-10x the market rate and gets decent results, along with email asking when the next task is coming.

In Shaw’s view, there needs to be “a more serious examination of the question. Experimental evidence suggests sub-populations of people who would respond differently. Some people will be motivated by doing good; others don’t care and want the $0.05. We need better ways to test. It’s situation-specific.”

As he wryly noted, “We’re not all homo economicus.”

As usual, this was an excellent lunch. You can view the archived video of the presentation as a .mov.

Following the presentation, Aaron wrote me to add the following:

“Daniel and John’s contributions to the field of experimental research on online labor markets include

  1. recognizing that AMT could serve as a venue for experimental studies;
  2. conducting the earliest labor market experiments on AMT;
  3. solving a bunch of difficult problems so that they could make valid causal inference based on the results of these experiments.”

I have to note one other organization I learned about today: TxtEagle. TxtEagle is an innovative concept for active “mobile crowdsourcing,” distributing small-scale jobs via SMS and delivering payment the same way.

In other words, microjobs with micropayments. The mobile platform’s founders recognize that there are more than 2 billion mobile phone users in the developing world who could potentially be leveraged to perform tasks. The BBC wrote that “txteagle is changing the dynamics of outsourcing labour.” Hard to disagree with that.

Filed under blogging, research, technology