Scanning Headlines

We all scan headlines, whether it's the printed newspaper, Techmeme, Huffington Post, Hacker News, Seeking Alpha, Google Reader, Google search results, or Twitter. That's the way we consume information. For any given set of headlines, we might click on one link and read a full story. That's the way I've been reading the newspaper since I was a teenager.

So it should not be surprising to anyone that the same is true online. Arnon Mishkin, a partner at Mitchell Madison Group, has a post on Paid Content where he asserts that:

We did a study of traffic on several sites that aggregate purely a menu of news stories. In all cases, there was at least twice as much traffic on the home page as there were clicks going to the stories that were on it.

What that says to me is that at least one out of every two visitors found nothing they wanted to dig deeper into when visiting one of these link pages. And that may well be true.

But it does not mean that the other 50%, who did click on a link and go visit a story, are not valuable.

I get 52% of my monthly visits from "referring traffic". And another 16% from search. Only 32% of my monthly visits come direct.

So if I somehow took my posts out of the "link economy", my traffic would in theory decrease by 68%.
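
To make the arithmetic concrete, here is a minimal sketch; the visit total is hypothetical, and only the percentage splits come from the post:

```python
# Hypothetical monthly visit count; the source splits are the post's numbers.
total_visits = 100_000

referral = 0.52 * total_visits  # visits arriving via links on other sites
search = 0.16 * total_visits    # visits arriving via search engines
direct = 0.32 * total_visits    # visits typing the URL or using a bookmark

# Leaving the link economy would, in theory, forfeit the referral and
# search visits: 52% + 16% = 68% of the total.
lost_share = (referral + search) / total_visits
print(f"Visits lost by leaving the link economy: {lost_share:.0%}")  # 68%
```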

It's the sad truth of the content business on the Internet, or at least of many parts of it. If you are a content owner, the front door to your content has moved to a place you don't control. You can get it back by walking away from the link economy, but I don't see many content owners doing that. It's likely suicide.

As Arnon points out, the better move is to try to become an aggregator.

Consider partnering with other content makers and developing appropriate aggregation sites of their own.

NBC and News Corp did that with Hulu and the results so far appear to be quite good. If the front page of NYTimes.com linked to everything interesting on the web instead of just their own stories, they could play the same game. I understand the organizational reluctance to do that, but I wonder if they have any other choice.

Comments (Archived):

  1. ShanaC

    This is something I keep thinking about, very deeply. I find the more time I spend involved on the internet, the more irritated I get at aggregators, however I define them, because even with human involvement (including mine) I can’t fine-tune them enough to get them where I want. The ideal aggregator should be a management tool. Instead it makes life feel more bloated with information, some useless, some not… gah. I would kill for (i.e. pay for, and yes I mean that seriously) a very fine-tuned, socially involved aggregator + search which is additive. I feel like I am never going to get consistently anywhere, or even a guesstimate of what I want, if the process is not additive in a much better sense, and if it doesn’t guess over time… My aggregators can never tell me when I need to get another blog, or news headline, that will be important to what I am doing in my life. Or when it should be disrupted by, say, the United States sending a Senator to Myanmar… It’s frustrating. In theory something like twitter should cover all bases; in reality, this never works because it moves too fast, and you don’t want to be attached to your tweets. Further, something that has always shocked me about aggregators as they stand today: they don’t all manage the push outward equally. With all of this social networking blah blah blah stuff, you would think that if you can add them into your aggregator, you would want to push out from your aggregator equally too. This is not the case… So you end up having to scan, then leave. Pain in the neck.

    1. William Mougayar

      Shana, you hit the nail on the head re: “…aggregators not pushing out equally”. Total neutrality and comprehensiveness have been a day-one objective for my business model (we’re a customizable super-aggregation platform). The user should be able to get a true 360 view on a given topic without bias, but with social-rank data. But to have such a fine-tuned aggregator, you need to start with well-defined topics. Then a composite of your topics can be created for you. The aggregator itself would worry about new stuff and content, not you. We have a few such topics already here: http://portal.eqentia.com.

    2. fredwilson

      I’ve seen literally hundreds of attempts to build the perfect aggregator and nobody has done it yet

      1. ShanaC

        There is no such thing as the perfect aggregator, but I’ve seen some great ideas out there because they switched from “gee, you need to aggregate to get good stuff” to “gee, aggregating makes getting specific tasks easier, hence I should aggregate.” If more models were built on the second, we would be a lot better off. I’d rather use a ton of very specialized aggregators with a nice general news ticker at the top than be in the state I am in today… http://www.shanacarp.com/es

      2. William Mougayar

        What should the perfect aggregator do?

        1. ShanaC

          Help complete a task that can only be completed when one is shown a wide variety of similar information. It should help narrow down choices, and make all of those choices easier to complete, not make more and harder choices!!! Change aggregators from a generalist tool to a specific-use tool and they will be so much more productive, because computers just do not have the means to deal with our moods (unless they become moody). It will become a choice then of changing tools, not biting nails and working so hard to make a too-broad tool always work (it won’t; let’s move on).

        2. fredwilson

          It should know what I want to read at that moment without me doing any work to configure or tell it anything

          1. William Mougayar

            I knew you would say that 😉 . I think we agree on the end-point, but the trick is how to get there: whether it’s via pure learning from the user, or aided by a kick-start of sorts.

          2. Andraz Tori

            A user’s browsing history might be a great learning set for the aggregators, but are you willing to give that data out? OTOH it seems that ‘social aggregation’ is faring much better than the fully automatic kind; most of what web communities do is ‘social aggregation’ of data. bye, Andraz Tori, Zemanta

          3. fredwilson

            Hi andraz. I give out my browsing history right here on my blog. It's on the blogrollr widget. Unfortunately my daughter borrowed my laptop and has been visiting fashion blogs!

          4. Alex Popescu

            IMO historical data is just a small piece of the whole solution, and Fred is presenting the most basic example of how easily its usage breaks. The more general problems/questions to be solved about historical data are:
            - what parts of the historical data are relevant? (this is probably one of the reasons APML hasn’t been adopted so far)
            - how should the system rank historical data, considering possible changes in the user’s interests over time?
            - how should the system use historical data when the user is operating in “short-term interest” mode (looking for the latest game results, comments, etc.)?
            bests, ./alex
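
            One way to picture Alex’s second question, ranking historical data as interests change, is to decay each old signal exponentially, so that last week’s clicks outweigh last year’s. A minimal sketch, with a made-up half-life and browsing history; this addresses only the interest-drift question, not the other two:

            ```python
            from collections import defaultdict

            HALF_LIFE_DAYS = 14.0  # assumption: an interest signal halves every two weeks

            def interest_profile(history, now_day):
                """Fold (topic, day_of_click) pairs into time-decayed topic weights."""
                weights = defaultdict(float)
                for topic, day in history:
                    age_days = now_day - day
                    weights[topic] += 0.5 ** (age_days / HALF_LIFE_DAYS)
                return dict(weights)

            # Hypothetical history: (topic, day number on which the click happened).
            history = [("startups", 1), ("startups", 3), ("fashion", 28), ("fashion", 29)]
            print(interest_profile(history, now_day=30))
            # {'startups': ~0.50, 'fashion': ~1.86}: the recent fashion clicks dominate,
            # which is exactly the borrowed-laptop failure mode Fred describes above.
            ```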

          5. vincentvw

            Isn’t it more a matter of contextual relevance? What I mean is that news can be relevant to a person because it’s about a certain location (what I imagine that company FW invests in does; is it called OutsideIN?), because it’s to be consumed at a certain time (e.g. when the markets open), or because it’s of global interest (e.g. 9/11 or a global pandemic). It’s human nature to browse around for no reason at all, which is what many people do, and no wonder that “(not so) smart” aggregation, basing suggestions only on the past, gets confused. Group aggregation is kind of effective, I guess (e.g. Techmeme), but it’s also often not directly relevant to your every day. To me, the perfect aggregator collects news that is relevant to my context, and when you really get down to it, that means that Google search is about as smart an aggregator as is needed.

          6. Pascal-Emmanuel Gobry

            The problem, and the strength, of social aggregation is that everything depends on the community. If you have the right sort of community the best news comes up, and if you don’t the worthless stuff comes up. The best example of a successful social aggregator, in my view, is Hacker News, while the worst is Digg, which has become completely useless to me as it went mainstream. Not that I think there’s something wrong with “going mainstream”, on the contrary, but there’s something about the *way* Digg did it that let all the good out and all the bad in, and now whenever you go to Digg the only things that come to the top are lolcats. Which is why I thought Reddit open-sourcing their platform was such a great idea: I think what you need is a sort of “Ning for News”, where there would be infinite potential for niche communities to come together around a smart technical platform for aggregation. But Reddit never really pulled it off, probably because they got acquired too soon. Then you could build “meta-aggregators” around specific verticals: tech, politics, etc. Perhaps today the best potential for a smart social aggregator might be Twitter, especially combined with Bit.ly and other services.

          7. fredwilson

            I think it can be built on top of twitter. Agreed about hacker news. What a great service and community they’ve got there

          8. Lyn Headley

            Hi Pascal, have you seen the Hourly Press? It’s an aggregator based on an authoritative social filter that aims to circumvent the Digg problem. It gets its links from twitter. (I’m involved.) http://hourlypress.com/ The first instance is News about News: http://newsaboutnews.hourly

          9. Pascal-Emmanuel Gobry

            Nope. I’ll check it out. Thanks for the heads-up.

      3. Alex Popescu

        This is the field I’ve been working and experimenting in for a couple of years already, and I’d say there’s no such thing as the perfect aggregator:
        1. Our interests are normally very wide (e.g. I consume content related to startups, technology, and software architecture, but also a bit of sport, though not all sports, politics, global affairs, etc.). Basically that means that vertical aggregators will only be able to provide “locally” optimized content (as in a math local optimum), while generic aggregators face an even bigger problem: filtering through a huge amount of data.
        2. Another problem faced by aggregators is if and how they balance quality against timeliness. There’s content that is timely, but there’s also content for which quality matters more. To make things even more complicated: a) for some of us timeliness counts more than quality, while for others it’s exactly the opposite; b) there isn’t one content type or content vertical that falls in one and only one of these categories.
        3. Last, but not least, the way we consume content is mood-driven. And even if there have been attempts to quantify mood-based consumption patterns using historical data, there’s no guarantee that it will work on a daily basis (simply put: even if for the last couple of weeks I’ve spent more time reading about startup funding, this doesn’t mean that today my top priority wouldn’t be tech topics).
        4. Historically, we’ve been using a search-based content consumption model (and most of the time we’ve been worried about not losing “important” content). But due to the exponential growth in content creation, I think we will have to move towards a recommendation-based content consumption model (there’s a lot more to say about this, and unfortunately it won’t fit in a single comment).
        And there are quite a few more problems that make me believe there’s no such thing as a perfect aggregator. Anyways, I’m one of those who have always believed in what Dave Winer formulated so well: “the fundamental law of the Internet seems to be the more you send them away the more they come back” (and I’ll continue to work and experiment on improving the online content consumption experience).
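
        Alex’s point 2, the quality-versus-timeliness balance, is essentially one tunable exponent in the kind of ranking formula aggregators use. A minimal sketch of the commonly cited Hacker News-style shape (the stories and numbers here are invented, and real sites use more elaborate formulas):

        ```python
        def rank_score(votes, age_hours, gravity=1.8):
            """Quality signal divided by a power of age.

            Higher gravity favors timeliness; lower gravity favors quality.
            """
            return (votes - 1) / (age_hours + 2) ** gravity

        fresh_story = (5, 1)     # few votes, one hour old
        great_story = (120, 24)  # many votes, a day old

        for gravity in (1.8, 0.5):  # timeliness-heavy vs. quality-heavy setting
            fresh = rank_score(*fresh_story, gravity=gravity)
            great = rank_score(*great_story, gravity=gravity)
            winner = "fresh" if fresh > great else "great"
            print(f"gravity={gravity}: the {winner} story ranks first")
        # gravity=1.8 puts the fresh story first; gravity=0.5 puts the great one first.
        ```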

        1. fredwilson

          I agree with all of these challenges

        2. ShanaC

          So would a task-oriented aggregator at least get around some of the challenges? Switch tasks and you switch aggregation tools, and we stop trying to make everything a one-stop shop. Or would that make the problem worse, not better?

          1. Alex Popescu

            Shana, this is an interesting idea that I’d like to think more about. Would you mind giving a couple of such “task” examples? In case such a task is “find out the details of the FriendFeed acquisition”, I think this will remain in the search field (or if it is a longer-term “task”, then alerts are probably the way to go). ./alex

          2. ShanaC

            The better question we should all be asking ourselves is: what purpose does aggregating serve in this instance? I wrote about it in the case of Blip.TV; I think their aggregator solves one problem for their user base: brand management. However, that aggregator assumes a high level of text and interaction. Its goal is to easily create communities for TV show creators where their consumers hang out. Not every aggregator needs to have that goal.

  2. Chip Griffin

    I suppose the real question is which type of visitors click on the most ads or otherwise take actions that increase the profitability of the site. If you don’t find a headline to click, do you perhaps click an ad instead? Or are the home-page reloaders your loyal visitors who end up linking to the site and generating traffic (and subsequently revenue)? Do “destination clicks” end up going back to where they came from without bookmarking the site for future traffic or clicking an ad to generate direct revenue? The headline-scanning numbers are certainly interesting, but they are only part of the story. It’s sort of like my admonition to people concerned with traffic to their blog: quantity is only part of the story. You might only have 3 readers, but if their names are Bill Gates, Warren Buffett, and Barack Obama, and they like what you have to say, it doesn’t matter. Same with site visitors. Raw numbers only tell part of the story. The revenue those readers help to generate is really the important thing in assessing the business.

    1. fredwilson

      It's certainly true that quality is as important as quantity. But in a cpm world, quantity matters a lot. Maybe that's why cpc/cpa is growing and cpm is not

      1. Chip Griffin

        Even in a CPM world, however, the rates vary greatly depending on the quality of the audience. Eyeballs are not all created equal. Ultimately, whether you pay by CPM/CPA/CPC or something else, you are trying to reach a certain audience and achieve a particular outcome. Eyeballs get publishers noticed by advertisers. The quality of that audience keeps them (and their agencies) coming back.

  3. William Mougayar

    That’s the dilemma facing most large publishers. Their so-called aggregation topic pages are not comprehensive, and are biased towards their own content or their network partners’ content. Those who bite the bullet and offer truly comprehensive aggregation from outside their walls will benefit from it. As Erick Schonfeld said a few days ago, the vertical integration model of the newspaper is dead; their content needs to be dis-integrated and exploded. I think there’s a battle brewing in aggregation and curation models.

    1. fredwilson

      I agree

        1. fredwilson

          Yup. Good post

  4. RWK: disruptive tech/guerrilla

    Fred, yours is precisely my point to the nytimes. Of course, buying a paper due to a header on twitter isn’t technically a “click-through.” Maybe a “pick-through?” Regardless, I contend that most … 70% of us??? … walk by a coffee shop or book store that carries the nytimes within 2 hours of seeing a tweet. I know I personally spend 6 hours a day staring at the nyt rack from the hard wooden chairs of the Santa Monica or Spring St. Soho starbux. Oddly, I never Ever see a tweet from the times that makes me walk five feet and spend five dollars. The papers don’t have to create an aggregation network. They just need to learn how to use twitter.

  5. Mark Essel

    I don’t think I’m a fan of master/servant economies, and content creators are playing for free for the aggregators. What is an aggregator, after all, but a big bin for various content? Search is the ultimate aggregator. Great content creators should be rewarded just like great aggregators. How about a dynamic model where aggregators pay a share of the revenue they generate to the original content creators? Too hard to implement? Just use DNS to send PayPal daily to site owners. The challenge is identifying the originating owner of the content. Associate a unique timestamp and an encrypted-ID watermark equivalent with any content.
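
    Mark’s “encrypted ID watermark” is only gestured at here, but the simplest form of identifying an originating owner is a content fingerprint: hash the normalized text and treat the earliest registrant as the originator. A minimal sketch of that idea (the registry and its rules are hypothetical, and surviving rewrites or excerpts would need fuzzier matching such as shingling, not exact hashes):

    ```python
    import hashlib

    registry = {}  # fingerprint -> (owner, first_seen_unix_timestamp)

    def fingerprint(text):
        """Normalize case and whitespace before hashing, so trivial
        reformatting does not change the content's ID."""
        normalized = " ".join(text.lower().split())
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

    def register(text, owner, timestamp):
        """Record the earliest registrant of a fingerprint as the originator."""
        fp = fingerprint(text)
        if fp not in registry or timestamp < registry[fp][1]:
            registry[fp] = (owner, timestamp)
        return fp

    def owner_of(text):
        entry = registry.get(fingerprint(text))
        return entry[0] if entry else None

    register("We all scan headlines...", owner="avc.com", timestamp=1_250_000_000)
    print(owner_of("we all  SCAN  headlines..."))  # avc.com, despite reformatting
    ```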

    1. William Mougayar

      Mark, it’s the other way around. One could argue that original content generators should share revenues with the aggregators if the aggregators are sending more traffic their way (which the content owners can hopefully monetize). Remember the WashPost/Gawker case just two weeks ago, where a Gawker pick-up of a lame WashPost story gave it limelight, hits, traffic, comments, and fame for its author and editor. None of that would have happened without the clout of the online aggregator, in this case Gawker.

      1. ShanaC

        I’ll argue and argue again: it’s the community around content that is valuable… I know Fred gets very frightened when I say this; the market is going to pay for community, pay to comment, pay for verification. We want to know we have the authentic. I know that here, at least for a very long time, it will be free; content in general apparently is something created by a community of people. And if the overlords of the brand want to control it, they are going to have to make people pay to have their name hooked onto the brand, just like I have to pay for a pair of Bensimon sneakers (note: I do not own Bensimon sneakers, just think they are hot…). Aggregators are part of that economy, in that they control the lens through which we first see brands, and how we rank them, especially as aggregators technically get better (I suppose they are going through the awkward growth-spurt period now). Whether the majority of us realize it or not, I believe this is one of the top-ranking VC blogs because of the power of branding. Content in these comments and in the blog itself could be seen as more valuable than in the lower ranks…

      2. Mark Essel

        I guess things are converging on a hybrid sharing model between content generators and aggregators. That keeps all parties working to make the best user experience, and getting fairly rewarded for the value they create. Thanks for the counter view, William.

        1. William Mougayar

          Yes, I think they both provide different value, and the irony is that they need each other: a yin and yang. It would be good to see some official reconciliation between the two camps, away from the current animosity and finger-pointing.

  6. Pascal-Emmanuel Gobry

    Wait, 32% of visits are direct so if you removed yourself from the “link economy” your visits would drop by 85%? Shouldn’t that be 68%? What am I missing?

    1. fredwilson

      Nothing. Bad math on my part

    2. Mark Essel

      Fred’s just great at trend prediction, and is extrapolating a few months into the future ;). I was trying to follow the numbers and assumed I wasn’t following the terminology. Thanks Pascal.

      1. Pascal-Emmanuel Gobry

        You’re welcome. Yeah, the trend is right; in reality he would probably lose more visits over time.

  7. the build

    To really determine the value of aggregation in terms of traffic, session times and repeat/return-visitor data would be important. It’s too easy to game traffic, including uniques and page views, so what the traffic does, and how much of it, becomes the story. So far there haven’t been any discussions of this in relation to the real value of aggregators, so I’m reluctant to accept that newspapers, sites, etc. are losing something by leaving them. Aggregation can be a way to draw an audience to a platform, but so can SEO, and SEO hasn’t proved to necessarily create real audiences that use and return to a site. There are tricks to creating the appearance of traffic, but if you really want a member base, this is not the way to go. People overemphasize traffic, and expectations of what sites should be bringing in are really out of whack. They’re not in line with what’s proven true in the content and platform business over the course of time.

  8. Tweet Feeds

    Interesting… we see the world so much through what we know; we currently have what seems to be spam for news because of ineffective aggregation

  9. Facebook User

    This also points to the importance of recirculating traffic within your site once someone lands on an article page. I looked into this a while ago, and of all the people who visited the NY Times in that month, only about 20% had visited the front page even once during that month. For the Washington Post, that number was even lower, at 14%. http://blog.agrawals.org/20… And those are two of the biggest global news brands. I suspect that the numbers are even worse today.

    1. TheNewPowerGirls

      What analytics sources are used here?

  10. Jonah Peretti

    Amazingly enough, most major news sites do not optimize their headlines to increase click rates.

    1. fredwilson

      That’s an interesting point Jonah. Are there tools to do that or is it an art form?

      1. Facebook User

        Here you get into a conflict between search and social media. The NY Times had a piece on this a few years ago called “This Boring Headline Is Written For Google”: http://www.nytimes.com/2006… The optimal headline from an SEO perspective would be generic and loaded with keywords; cuteness, wordplay, and allusions don’t go very far. But it’s just the reverse for social media. There, you want to write something that clicks with a human.

        1. fredwilson

          I never write a headline with seo in mind

          1. Mark Essel

            Here the robots and machines need to learn from humans. If many humans select a link due to its title on social media, the search engine should capture these “votes”. Matt Cutts may have some insight on this from the Google sandbox; he said they’re doing some pretty major overhauls to shift the big G closer to real time.
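
            To Fred’s question upthread about whether headline optimization is done with tools or is an art form: the tool part is usually a split test that serves candidate headlines and keeps whichever earns more clicks, which is also one concrete way to capture the “votes” Mark describes. A minimal epsilon-greedy sketch (the headlines and click-through rates are invented):

            ```python
            import random

            def pick_headline(stats, epsilon=0.1):
                """Usually serve the best-performing headline; explore occasionally."""
                def ctr(h):
                    return stats[h]["clicks"] / max(stats[h]["views"], 1)
                if random.random() < epsilon:
                    return random.choice(list(stats))
                return max(stats, key=ctr)

            def record(stats, headline, clicked):
                stats[headline]["views"] += 1
                stats[headline]["clicks"] += int(clicked)

            stats = {
                "Boring, Keyword-Loaded Headline": {"views": 0, "clicks": 0},
                "Clever Headline a Human Would Click": {"views": 0, "clicks": 0},
            }

            # Simulated readers; assume the second headline truly converts better.
            true_ctr = {"Boring, Keyword-Loaded Headline": 0.03,
                        "Clever Headline a Human Would Click": 0.08}
            for _ in range(10_000):
                headline = pick_headline(stats)
                record(stats, headline, random.random() < true_ctr[headline])

            best = max(stats, key=lambda h: stats[h]["clicks"] / max(stats[h]["views"], 1))
            print(best)  # almost always the higher-converting headline
            ```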

  11. Dan Marsh

    Consistently good, in-depth content sources will be rewarded with a bookmark in my browser. Aggregators like Techmeme help me filter out the noise. Keep the noise off your site, and I will be your direct traffic any day.

  12. omichels

    To me, the last two sentences sum the whole thing up: “If the front page of NYTimes.com linked to everything interesting on the web instead of just their own stories, they could play the same game. I understand the organizational reluctance to do that, but I wonder if they have any other choice.” This crystallized some thoughts I’ve had on how the production and distribution of news are destined to split. I believe the news business will begin to resemble the ecosystem of the motion picture business. If the news business is to survive, production and distribution must be decoupled. The strong national brands will likely be able to play on both sides, just as the major motion picture studios do, but even there the production and distribution divisions will need to be autonomous. Meanwhile, we’re going to see new business models in which newsgathering and production can be done efficiently and profitably, and distributors will figure out how to make money while providing the necessary economic incentive for the producers. News producers who are opening APIs and experimenting with a variety of syndication business models are likely to lead the way. Those that aren’t are sitting on the sidelines hoping that when the innovators “crack the code” they’ll still be able to jump in and join the party. By then, it will likely be too late. More here: http://www.praxicom.com/200

    1. fredwilson

      Great comment and post oren

    2. juepucta

      The NYTimes site already did that whole linking-to-outside-sites thing, recently actually, although I think they already killed the idea. For the life of me I cannot remember the stupid name of the project, the personalized online edition of the paper (Times Extra?). These days their technology and money sections aggregate. Not arguing against your point, in any case; just pointing it out. -G.

  13. Chris Phenner

    I think omichels’ post above implicitly re-frames the ‘link economy’ as the ‘API economy’ and rightly de-couples production and distribution. For real readers, links are lame, and here’s why: a ‘link’ requires a click-through, and click-throughs are slow. It reminds me of Fred’s post about streaming overtaking P2P file-sharing: it’s getting faster to stream. Similarly, ‘no-click delivery’ of well-targeted content may compel users to pay for (full-length) content. APIs are a faster and more measurable way to deliver full-length content to licensed aggregators. If you know anyone who pays for ‘free’ blog content via their Kindle, that is a statement that speed (no clicks) and context (e-reader delivery) matter more than freely available links. If you consider the forthcoming number of ‘end points for reading’ (a slew of e-readers, netbooks, the iTablet, PC-less printers and smart phones), device-sensitive targeting (sans ads) is important for aggregation. For skimmers, scanners, grazers and snackers (online), the link-based economy may be fine and worth the time that click-throughs require. But RSS/Digg/Techmeme requires two clicks from headline to full-length content, with pages littered with ads and additional page clicks. Licensing by aggregators that pay content producers a direct share of the value derived from READING (not click-throughs) strikes me as a brighter future, to omichels’ point. It will also avoid a future where a so-called ‘wadget’ (puke-in-mouth) is used to monetize aggregation, as suggested at the bottom of Mishkin’s post on paidContent.

    1. ShanaC

      Here is the question: how do you know what should get there? How do you know who is up-and-coming, who is a good old-fashioned regular, and who is a one-shot wonder who needs to be there that one time, without taking a look around through the links? At the end of the day, of course we all want that perfect newspaper for ourselves, but how do we figure out who is reading at a given site? We all want to be direct, but we built the web in a way that isn’t by nature direct; it is indirect.

    2. fredwilson

      Hmm. A move back to licensed content? I’m not sure about that. It has so much overhead

  14. Adam Berkan

    If the newspaper companies could just agree to link among themselves, with everyone linking to the original articles, the existing link culture would automatically move those stories to the front of search results. Instead everyone rewrites AP articles and randomly one of them gets to the front of the search results, and it grows exponentially. (Well, not really randomly. One will have the most inflammatory title, so it will probably win.) If they linked more broadly, I think it would only help. I hope a few papers figure out how to run in the internet age before they all go bankrupt…

  15. michaelcader

    Per your earlier question in the comments, our experience is that good aggregation is an art. And the value to readers isn’t just in the aggregating but in what you select and how you present it. You also need to understand whether your particular audience is a) looking for interesting/relevant things to read, or b) looking for aggregated relevant information without having to click through and read further. We micro-aggregate within our field (book publishing professionals) and slice/curate sub-sections of aggregations. But at the top level, we select the most relevant/interesting stories from hundreds of feeds, and rewrite the heads to convey as much information as possible in the link line. In an information-crowded world, communicating the bullet points of interest while saving you from reading the whole story is quite valuable. The better we do that, the less our readers need or want to click through for the full story. (Most people tend toward the opposite: a provocative teaser head that’s designed to elicit click-throughs.)

  16. Shawn Hickman

    I love your idea for the NYTimes to link to content that isn’t their own. If they did that, I would be more likely to use their site. They could also use some help with their homepage design. Too busy for me.

    1. Dave Blanchard

      This touches on an interesting point. This discussion is, rightly so, focused on the difficulties of getting the right content into the right place (and at the right time, etc.). However, given that set of challenges, how much do we think the way content is displayed has to do with the usability of the aggregator? Surely the future is not headlines listed RSS-style, but something that clearly communicates what we should prioritize and why we should read it, in a format that lives beautifully across all of my devices. Currently, it seems that most solutions are focused purely on content.

  17. Chris Motes

    Quote of the day: “the front door to your content has moved to a place you don’t control”

    1. fredwilson

      @qotd baby!

  18. Shreenath Regunathan

    Thanks for a very interesting thought piece; my view is that this is a bit like the Netflix algorithm challenge: how do you really figure out what you want to read next (for Netflix, change read to see!)? People have varied and diverse interests, and trying to home in on what you want to read is all the more challenging since your capacity for content on some topics is a breadth play (news on random events around the world, maybe events in your city) or a content play (things that interest you, maybe Formula One, soccer news on your club, cooking, etc.). The holy grail of this is a bit like StumbleUpon paired with your interest profile from Facebook/Delicious or otherwise. Do you want something/someone/some site to actually have you figured out? Does that change your reading behavior, just to push the limits perhaps, and slowly change your very approach too? Sometimes the most interesting things are way beyond the scope of your “normal” reading! SR

    1. fredwilson

      I think my clickstream, my various social nets, and a bit of stumbleupon together would get a lot of that done

      1. Shreenath Regunathan

        True that; but do you think your reading is that predictable/expected?

  19. Robert Hacker

    The notion of becoming an aggregator of content is why Facebook bought FriendFeed. http://bit.ly/4CAKo2

    1. fredwilson

      I don’t think so. I believe it was a ‘human resources’ acquisition

  20. Carl Rahn Griffith

    I don’t see this as necessarily being a problem, Fred; maybe it’s because we are still conditioned to think of how we scanned hard-copy newspapers? Whilst we may think (through rose-tinted spectacles) that we ‘read’ the entire newspaper, in fact what we typically did (or ‘do’, if still a patron of hard-copy news, like myself) is only drill down into a small percentage of articles. From a headline we assimilate very quickly what stimulates us and so what warrants our further time to read in more detail. Only a given number of stories at a given time can truly engage us, depending on our frame of mind at the given time, attention span available, external influences, motives, etc. I’m happy to sacrifice some time to skim a given % of headlines, knowing this is part of the process of identifying/filtering what’s really of interest to me. If all my headlines were so finely tuned to me that I drilled down 100% of the time, I’d be a tad concerned that I was missing out on other news/articles that required me to decide whether they interest me or not. With ‘perfect’ headline tuning there is also the risk of losing that greatest of delights: serendipity. The more we aggregate, the more headlines we have to scan and decide upon, and all in the blink of an eye. If an app can (over time) ‘learn’ from what the reader is actually drilling down into, it can begin to fine-tune the results on any given topic, as we are trying to do with ensembli. But, in my opinion, we must never lose sight of the fact that we’re talking about words, not data; the tuning of words and their nuances and the discovery of news/articles is something we should take delight in and not be frustrated by.

    1. magickseeker

      Very fascinating discussion. It reminds me of the debate between publishers (social gate keepers) and authors (providers of content).

    2. fredwilson

      Great point on words vs data. That’s a big deal carl

  21. davidmattia

    Great comments. Everyone seems to agree that (content-consumer) choice is the dominant trend, and others see how production and distribution should be separated, but why is the next step so hard? I guess because it has never been practiced by content-related businesses before now. The “win at any cost” social period we just flew through for years is receding, just as social “trust” is being examined in every part of our collective lives. Think of it as a pendulum swinging. How might media enterprises reflect this social need? Having come from several, I do not think the present culture at most old media outlets will support the kind of change needed, because their needs are overwhelming their senses right now.

  22. ShanaC

    I’d borrow those parts of her browsing history in a heartbeat. Especially if she has a good fashion aggregator. Specialized and good fashion advice is harder to come by than tech advice, especially if you are female…

  23. jarid

    >>If the front page of NYTimes.com linked to everything interesting on the web instead of just their own stories, they could play the same game. I understand the organizational reluctance to do that, but I wonder if they have any other choice.
    Interestingly, they already do “Headlines From Around the Web” with their Tech Update email. For example, today’s update had links to TechCrunch, Engadget, and TUAW: http://www.nytimes.com/inde… It’s not quite what you’re suggesting, but it’s a step in the right direction.

    1. Phillip Baker

      The NY Times has also already started to link to other sources from its front page, at least as a test (they are asking for feedback). There is a link under the masthead for Times Extra, which inserts several links under story excerpts. It’s not pretty, but it is there. I love that the Times has so many experiments out there, and I don’t think it gets enough credit for trying most of the time. But I also fear that the way many of these experiments are released, by tip-toeing around the main site, will negatively affect the results it sees and what it learns from each one.