It Is All Spam
My friends Jeff Jarvis and John Borthwick wrote a thoughtful post about the fake news issue and put forward fifteen suggestions for the platforms and news organizations that are struggling with it.
This suggestion got my attention:
Create a system for media to send metadata about their fact-checking, debunking, confirmation, and reporting on stories and memes to the platforms.
It reminds me of the efforts in the email sector to create metadata around email messages to help the mail platforms identify what is spam and what is not. Examples of such efforts are DKIM and SPF.
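For a flavor of what that sender metadata enables, here is a minimal sketch in Python of an SPF-style policy check. It is purely illustrative: the record string, the helper names, and the matching logic are simplified assumptions, and real SPF evaluation (RFC 7208) involves DNS lookups, CIDR matching, and more mechanisms.

```python
# Toy sketch of SPF-style sender policy checking (illustrative only;
# real validation is specified in RFC 7208 and involves DNS lookups).
def parse_spf(txt_record):
    """Split an SPF TXT record into its mechanisms."""
    parts = txt_record.split()
    assert parts[0] == "v=spf1", "not an SPF record"
    return parts[1:]

def is_permitted(mechanisms, sender_ip_block):
    """Very rough check: does the sender's network appear in the policy?"""
    for mech in mechanisms:
        if mech == f"ip4:{sender_ip_block}":
            return True
        if mech in ("-all", "~all"):  # explicit fail / softfail for everyone else
            return False
    return False

record = "v=spf1 ip4:192.0.2.0/24 include:_spf.example.com -all"
mechs = parse_spf(record)
print(is_permitted(mechs, "192.0.2.0/24"))     # True
print(is_permitted(mechs, "198.51.100.0/24"))  # False
```

The point of the protocol is that a receiving mail server can check a claim ("this message really came from this domain's servers") against metadata the domain owner published, instead of trusting the message itself.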
If you think about the guts of the Internet, you have simple protocols like TCP/IP, HTTP, SMTP, etc., that allow information to flow from one computer to another. These systems are inherently open, often radically open. Anyone can publish anything to anyone. That has largely been a good thing because it has allowed an open communication network to develop globally without a lot of interoperability worries and work.
But when you do that, you allow all sorts of bad things to happen. And the people who build and manage internet technologies have been trying to figure out elegant solutions to all of this bad behavior for the past twenty-five years (or possibly longer).
For me, the first example of this was email spam. And then search spam. Email and web search were two of the first wide open systems that were plagued with all sorts of bogus information and messages. And twenty years later, these two systems have largely been cleaned up through massive investments across multiple dimensions.
So when I hear of some new bad thing like fake news, I immediately think of spam. And I think of the things that have been done to manage and mitigate spam. There is a roadmap for mitigating and managing this sort of thing. It seems like we need to replicate it around fake news. And we should.
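The first step on that spam-mitigation roadmap was simple statistical filtering. As a sketch of the classic Naive Bayes approach (all training phrases and word counts below are invented for illustration, not real spam corpora):

```python
# Toy Naive Bayes spam filter, the classic first line of defense against
# email spam. Training data here is made up; real filters train on
# millions of messages and combine many more signals.
from collections import Counter
import math

spam_docs = ["win money now", "free money offer", "win free prize"]
ham_docs = ["meeting at noon", "project status update", "lunch at noon"]

def train(docs):
    counts = Counter()
    for d in docs:
        counts.update(d.split())
    return counts

spam_counts, ham_counts = train(spam_docs), train(ham_docs)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(msg, counts):
    total = sum(counts.values())
    score = 0.0
    for w in msg.split():
        # Laplace smoothing so unseen words don't zero out the score
        score += math.log((counts[w] + 1) / (total + len(vocab)))
    return score

def is_spam(msg):
    return log_likelihood(msg, spam_counts) > log_likelihood(msg, ham_counts)

print(is_spam("free money"))      # True
print(is_spam("status meeting"))  # False
```

Replicating this for fake news is harder, as several commenters note below, because the statistical signals separating fake stories from real ones are much weaker than those separating spam from legitimate mail.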
Wow, that was a well-written and detailed post by Jeff and John. I think what preceded this new wave of fake news is that we let content farms thrive for too long, and accepted that. Content farms are the content garbage of the Web. Linkbait and attention hijackers are the bad news.
Specifically to Fred’s point about the metadata piece and our previous conversations about the Semantic Web (Albert may also be interested, given his arguments wrt the Knowledge Economy): Fred noting the “massive investments across multiple dimensions” needed to solve email spam and search spam is applicable. The metadata that can train AI to understand the subtleties of language, to detect WHY and HOW a piece of content is fake, valuable or otherwise, needs massive investments across multiple dimensions. And none of the techcos currently have the right metadata or algorithm frameworks to solve it.
@fredwilson:disqus — The metadata issue is key. Hopefully, Albert also reads this post because it connects with his Knowledge Economy thinking.

In July 2010, Google acquired MetaWeb in a private sale, and that became the basis of its Knowledge Graph. In July 2008, Microsoft acquired Powerset and its metadata structures for an estimated $100 million to power Bing. Facebook created its own metadata underpinning its Social Graph.

From Wikipedia: “One of the earliest known forms of the social graph was created in 2002 by Harvard student Philippe Bouzaglou in a paper published on the Harvard Department of Economics website. The paper replicated the Kevin Bacon Game using Harvard students and for the first time, gave an overview of an entire social graph, allowing the analysis of the characteristics of the network using graph theory. This paper was written for a seminar that was attended by Dustin Moskovitz who later became a Facebook co-founder.”

When we look at Marvin Minsky’s 1991 symbolic vs connectionist (probability) structures for Neural Networks (aka Deep Learning), we see their influence on the RDF and FOAF meta taxonomies of today. Likewise in MS’s Concept Graph, which was released at the start of Nov 2016:

* https://concept.research.mi…

What Election 2016 shows is that these metadata taxonomies and the algorithms that run them are NOT adequate enough to separate “fact” from “fake news”. To all intents and purposes, the Knowledge Graph, the Social Graph, Microsoft’s Concept Graph, IBM Watson’s Cognitive Graph, etc. are all objective “fact-based” classifications. And yet they all failed to detect or filter the “fake news”. That means there’s something fundamentally missing that needs to be invented at the meta-level. And that missing system can only be invented by applying first-principles reasoning.

Moreover, the invention is so vital, it’s as if the Knowledge Graph, the Social Graph et al are the Classical Newtonian Model for data, AI and economics, when what we need is the Unified Model that somehow makes the Classical Newtonian Model work with the Quantum Relativity Model. The hardest of hard problems across multiple disciplines. HOWEVER, someone somewhere may just be “mad scientist + artist” enough to have a shot at inventing it. LOL.
Design and engineering practices, incentivized by meeting investor targets, have all played their roles in the linkbait and attention-hijacker problem. These design and engineering practices have then been amplified by the inadequacies of the metadata and the biases of NLP frameworks like Word2Vec and skip-grams, which means none of the algorithms can understand our natural language expressions online.

In yesterday’s ‘Wired’: “Everyday coders won’t do. Deep neural networking is a very different way of building computer services. Rather than coding software to behave a certain way, engineers coax results from vast amounts of data—more like a coach than a player.”

* https://www.wired.com/2016/…

@fredwilson:disqus — This is why, whilst GitHub and Stack are both incredibly useful communities and tool sets, there is still a lot of heavy lifting and invention which their developers don’t (yet) have the skills to do. Getting the machines to a point where the metadata is comprehensive and can differentiate between “fact”, “fake news” and “subjective, open to interpretation opinion”; where the NLU frameworks are representative and coherent; and where the economic models encourage representative values distribution (monetary and knowledge) is a bigger and harder problem than Blockchain. It does need massive investment across multiple dimensions.

Importantly, it needs a different approach from Blockchain from the outset. It must put everyday folks at the heart of its design. Not engineers. Not miners. Not cryptographers. Not database architects. Not bankers. We’re talking about humanistic systems whereby it’s everyday folks who define and tune the data signals and the algorithms, rather than the algorithms feeding them noise.
Thanks, and please see my addendum comment to the one that embedded the FOAF structures of the Knowledge Graph. The meta issues get to the very heart of why and how the AI is currently incapable of understanding our human language, values and cultures. So not only do the distributed protocols, dynamic db and digital currency side of value have to be solved (via Blockchain-Bitcoin, Ethereum, OpenBazaar and others), the human language of value also has to be measurable and recorded at a meta-level.
I would track and check the fake news business model first. As long as fake and cheap news attract clicks and taps, it will be hard to stop. My guess is that the ad business is indirectly behind it; they should take some action and responsibility too. For example, ad clicks originating from pages containing fake news (to be determined later) could be discounted.

Anyone browsing without ad blocking knows that 50% of what they see on their screens is crap, whether it is out-of-context advertising or quasi-fake, recycled and outdated content. Perhaps it is time to focus AI not only on the potential customers but more on the sources.
The fake news business model looks like this: https://www.washingtonpost….
Thanks! Now let’s see who writes their checks and we’ve come full circle. I guess we will discover nothing resembling a partisan conspiracy, just business as usual.
Keyword filters and lists of blacklisted (dodgy) sites have been an imperfect yet efficient way of dealing with spam. The whole move of FB marking “Verified / Unverified” is simply FB algorithms referencing the news sources against those blacklists — so not particularly innovative.

For the next phase, we really need to enable the machines to understand natural language, and as MIT Technology Review points out, we don’t know how to build that:

* https://www.technologyrevie…

It will be increasingly important since ‘Humanity and AI will become inseparable’:

* http://www.theverge.com/a/v…

So we have to somehow transform our values, ethics and qualifiers (how we understand the “whys”) into quali-quant representations that are readable by the machines without losing any of the context of those values, ethics and qualifiers. What is news to one person is propaganda to another. And INTERPRETATION OF CONTENT is something the machines can’t do well enough.

In Oct 2012, I asked Amit Singhal (then SVP of Google Search): “Does and will Google’s Star Trek engine have a heart?” He replied: “Wow, that’s very deep for a first question … No, we’re sticking with facts and figures … Things that are measurable.”

Fast forward three years and, in Nov 2015, in a ‘Time’ interview, he said this: “Meaning is something that has eluded computer science. Natural language processing—or understanding what was said—is one of the key nuts we will have to crack.”

* http://time.com/google-now/

When we have hearts, we know WHY something is right / wrong, how truthful something is in our mind’s eye, what we believe in, stand up and fight for. Our hearts give us values. Now, does that mean we should be putting emotion buttons everywhere? Well, Facebook and Twitter both did that in Nov 2015, and those emotion buttons were no better as filters for Trump-bots, trolls or the clickbait content. So … I didn’t throw Amit Singhal of Google that curveball in Oct 2012 for no reason. LOL.
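The blacklist-referencing step described above fits in a few lines of code, which is part of why it is both cheap and crude. A sketch in Python; the domain names and labels are invented placeholders, not real blacklist entries or Facebook's actual implementation:

```python
# Sketch of blacklist-style link labeling: a lookup table of flagged
# domains, consulted when a link is shared. Domains are hypothetical.
from urllib.parse import urlparse

BLACKLIST = {"fakenews-example.com", "clickbait-example.net"}

def label_link(url):
    domain = urlparse(url).netloc.lower()
    # strip a leading "www." so www.example.com matches example.com
    if domain.startswith("www."):
        domain = domain[4:]
    return "Unverified" if domain in BLACKLIST else "Verified"

print(label_link("http://www.fakenews-example.com/story"))  # Unverified
print(label_link("https://denverpost.com/article"))         # Verified
```

The obvious weakness is the same one the comment raises: the list only catches known domains, and registering a fresh domain is trivial, which is why the harder natural-language problem remains.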
I was looking at Gartner’s 2016 hype cycle chart the other day and noticed “Affective Computing” climbing the first hill; it reminded me of you and your team. :) Do you agree with the definition? http://www.gartner.com/news…
Yes, :*), when we did that emotion recognition app to gauge people’s reactions to our role-play of Trump’s “I’m banning all Muslims” speech. That was using the Affective Computing tools of MS Cortana.

Affective Computing originated with MIT Professor Rosalind Picard’s work, published in 1995. It has since become multi-device and been combined with research from other disciplines within HCI and Daniel Kahneman’s Behavioral Economics frameworks from 1972 onwards, including Quantified Self. Emotient, acquired by Apple, is an example of Affective Computing’s commercial applicability: https://www.youtube.com/wat…

I’d say there are innovation triggers that haven’t been included on Gartner’s hype cycle. Some of those innovations are within Neuroscience and Quantum Physics. And they fundamentally change our existing assumptions about whether human language, intelligence, perception and consciousness are binary and probabilistic, and, therefore, how we design more humanistic systems that serve us better.
My huge issue here is: who decides what is fake? PolitiFact, for example, leans mightily left and is crazily biased. (I can list numerous examples if folks are interested.) Similarly, as is well documented, the WH orchestrated an entire echo chamber funded by leftist organizations to support the Iran deal. Is that fake news? Finally, we have examples from this weekend – Reince Priebus says, explicitly, “We’re not going to have a registry based on a religion,” yet the headlines that get tweeted out state EXPLICITLY THE OPPOSITE. I ran Yahoo News during the ’08 election cycle. The media IS biased. I think fake news is a red herring as a reason for the results of this campaign (see the Bloomberg editorial on this). In any case, I personally do not trust that any such system will be devoid of bias, in terms of who makes the ultimate determinations as to what is the metadata, what is fact. This entire election has shown the insane effect of confirmation bias on both sides. I think that bias will be in the DNA of any such system. It’s a huge step toward 1984 if such a system is implemented.
“I can list numerous examples if folks are interested.”

I’m interested.
Easy place to start is here (also biased, but that’s the way this stuff goes): http://www.politifactbias.com/ and here: https://www.google.com/webh…. Here’s an example of NBC News bias: http://dailycaller.com/2016…
oi. Read through that. Two biases don’t make a truth.
Could not agree more. But someone’s bias is going into what is truth and what is not truth.
AI researchers are currently trying to teach “common sense” to the machines. Those AI researchers are themselves:

(1.) Full of unrepresentative biases in the way they decide how to weight features and attributes.

* https://techcrunch.com/2016…

TechCrunch: “Erasing bias from databases is the key to creating impartial machine learning algorithms. But creating a balanced database is by itself a complicated feat. There is currently no type of regulation or standard governing the data that is used to train machine learning algorithms, and researchers sometimes use and share off-the-shelf frameworks and databases that already have bias ingrained into them.”

(2.) Unaware that BINARY LOGIC AND PROBABILITY TREES SELF-REINFORCE CONFIRMATION (VALIDITY) BIASES.
Just like someone pointing to Snopes as the “truth” to debunk something posted somewhere on the internet. Snopes is definitely biased. We are all biased to some degree, but flat-out purposeful misinformation (or lying) needs to be called out somehow. Just not sure how…
Fake news is certainly being used as the scapegoat in the liberal media for an election result they did not like. But it is also a thing. I would not throw one out with the other.
Agree – fake news is real. I do not believe any attempt to algorithmically eliminate it will work in a bias-free manner.
we figured out how to filter out porn spam
Fake news is a lot more challenging to filter than fake breasts.
But do you know fake news when you see it? Not being able to proactively define it is one thing. Not being able to reactively point it out is another, and makes it a much harder task to filter.
Agree, see my post. It’s a really tough problem.
I agree completely with both points. I think using it as a scapegoat is a shame because it means you don’t really examine the real reasons.

But the problem is that fake news isn’t like spam. It’s nuanced. And mis-filtering it has grave consequences. For instance, there is a post here that said the play Hamilton said whites need not apply. I thought that must be fake. I thought that couldn’t be right, and I researched it. There certainly is a NY Post article with that headline. Now, the casting call advertisement did say looking for non-white (in all caps). So is that fake news? One could say yes: Hamilton certainly did not post that whites need not apply. But one could say no: that is what it meant.

I agree I don’t want blatant lies posted, but the question is, where does that stop? For spam, people universally don’t want it; for the Hamilton ad, some people think it’s highly offensive, and some think it’s just giving people a chance.

And there is another thorny issue, which is: should companies like Facebook and Twitter be neutral? In one sense they could do whatever they want (they’d lose some people), but in some sense they are almost a public utility (not funded by the government).
Coming from audience development: general news is as biased as (and often less biased than) the readers it is trying to gather. Lose credibility, lose readers; so there is that check. I think we’re talking about the degree and acknowledgement of bias, and how much you want to be a partisan vs. general news source. A partisan news site doesn’t need to worry about readers as a check, because of what its goals are.
There is a big difference vs. email spam. Email spam is mostly not wanted by the consumer. Much of what is being called “fake news” is simply ideological news that “might be exaggerated, but is still sort of true”. The first list put out by a liberal professor includes drudgereport, breitbart, zerohedge and even redstate, but doesn’t include huffingtonpost, dailykos, motherjones, thinkprogress. This is an unbelievable difference that lends itself to a further backlash if the tech intelligentsia tries to keep up this fake news issue. Obviously ban the fake Macedonian news sites, but that is not what most people are really focusing on, and why I think this “fake news” crisis is a further misunderstanding by many (not all) liberals and elites and techies.
If Drudge and zero hedge are fake news then count me in as a fake news aficionado
there is a big difference between spam and unwanted email. same with fake news and news that has a slant.
How so? If I am sent an email from someone I did not give authority to email me, and/or the email is harassing me with shady offers, how is that not unwanted? The clear difference with news is that you are intentionally going to Google, Facebook, Twitter, or another information site, so it’s not entirely unwanted. If I get an unauthorized email that I didn’t consent to (more than once, after, say, I opted out), that is definitely spam. No?
spam : unwanted email = stuff I will never want : stuff I mostly do not want. That is why consumers have the option of tagging a sender as “spam”.

fake news : news with slant = FALSEHOOD : different shade of truth. The former is outright false and really should not be in circulation.
Yeah, but a good majority is simply a different shade of truth, and I am highly skeptical any mainstream institution will be fair; hence the backlash of this election and the comeuppance for the elite. They have earned scorn. With spam I get to choose; most solutions here seem to say someone else should decide for me. Unlikely for the common folk to like that.
Yeah, fair point. But the least we can do is find a way to identify and shut down stuff that is outright false (like the Macedonian stuff). The irony is that this is exactly what good journalism was supposed to solve: seeking the truth and reporting the news as accurately as possible. With the fragmentation and politicization of media, and everyone becoming a news generator and publisher, people have lost trust in a lot of things. Weirdly enough, the pendulum has swung the other way and they have begun to trust the really untrustworthy sources.
Yep, agreed. Total loss of faith in institutions: http://www.gallup.com/poll/…

Scary times in many ways, but only if we don’t use this moment to reform those institutions so that faith is restored. Funny thing is, those in power at those institutions don’t want to change… (obviously sarcasm; most people don’t want to change).
How did we get here, that institutions have become this trashed?
So, by your logic, we should ban every single major media outlet that has documented ties to the DNC?
..like the wsj?
Hey, I think the WSJ is barely even centrist anymore. So, yeah, shut them down too.
Interestingly, I just saw Ron Paul’s list of “real fake news” journalists and news outlets, sourced from Wikileaks, as a rebuttal to the liberal professor you mention: http://www.ronpaullibertyre…
That exaggeration is a form of lying. It would be considered perjury in court. So why not with news?
This problem is a lot harder than email spam. Almost no one forwards email spam, the read time is small, the clickthrough rate is small, and the sender reliability is consistent. The same is not true for fake news. In other words, the deviance between the predictive variables in fake news and real news is much smaller than it is between fake email and real email, which makes the problem much more challenging.

The real issue is that people en masse do not have much appetite for truthseeking. If they did, 9/11 being an inside job would obviously be an unending story. Instead it is generally quietly accepted and forgotten, or still not accepted. If we’re looking for a machine learning algorithm to learn from human behavior, first we need to improve human behavior.

9/11 was an inside job,
kid mercury
Following on from yesterday’s post: PSTP – Proof of Story Transmission Protocol, as a blockchain layer in the news media and publishing stack.
I think a refined hashcash would be an elegant solution: make bad behaviour expensive. Fake news, aka lies, being a relatively centralised problem makes this approach far easier than if it were happening through an open protocol. Detailed metadata is a proxy for effort, which is a proxy for expense. Why not just cut straight to the chase and make it expensive (whether financially, computationally, reputationally, etc.)? (Microtransactions? A tokenised crypto approach? Proof of burn/stake?)
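The hashcash idea above can be sketched in a few lines. This is a minimal illustration, not the real hashcash format: the payload string and difficulty are arbitrary choices, and a deployed scheme would tune the cost and bind the stamp to a date and recipient.

```python
# Minimal hashcash-style proof of work, a sketch of "make bad behaviour
# expensive": minting a stamp costs many hash attempts, verifying costs one.
import hashlib

def mint(payload, difficulty=3):
    """Search for a nonce whose stamp hash starts with `difficulty` zero hex digits."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{payload}:{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

def verify(payload, nonce, difficulty=3):
    """Cheap check: one hash, compare the prefix."""
    digest = hashlib.sha256(f"{payload}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

# Publishing a story would require attaching a valid stamp.
stamp = mint("story-id-123")          # ~16^3 hash attempts on average
print(verify("story-id-123", stamp))  # True
```

The asymmetry is the whole point: a legitimate publisher mints one stamp per story, while a spam operation pushing thousands of fabricated stories pays the minting cost thousands of times over.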
“make bad behavior expensive”++
This is the obvious “some are more equal than others” concept, from the Jarvis post:

“Give trusted media sources and fact-checking agencies a path to report their findings”

These trusted media sources are the usual suspects, or probably a new media source started by a legacy member of the usual suspects.
Trust is a hard thing to quantify. I don’t know of any effective solutions.
Oh, it’s crystal clear who they mean by “trusted media sources”: the gilded 400, just like Mrs. Astor used to do.

“Mrs. Astor presided over thousands of parties at her opulent mansions on Fifth Avenue, only open to those she deemed worthy (later known as ‘The 400’)”

https://gotham-magazine.com…
I have another suggestion, that ‘trumps’ (cough) all the others. It’s called: EDUCATION.
Fred, I think the best solution is one that’s in your portfolio: Stack Overflow. Spammy and fake-expert sites proliferate, claiming to “help” programmers and end users but really trying to run ads. I don’t know that Joel’s approach can be generalized up to news, but it seems to me worth an attempt.
yeah, stack is great for getting to the right answer
What’s cool about Stack is (a) it’s entirely social — would-be spammers aren’t in an arms race with an algorithm, they’re interacting with a community, and (b) it’s entirely transparent — all the outcomes can be traced back to their inputs. I’m not saying that Stack Overflow as-is could be extended to manage news, but that the people behind Stack might be good people to tackle the fake-news problem, because (I think) it’s a similar problem, unlike message spam, where you can focus detection on the sender as much as on the content.
Maybe this situation will help large, serious news publishers win back subscriptions and real trust from the masses that left for free internet news sites. This topic could shake internet principles to their core, and maybe make us consider human beings and their reasoning as the best option, instead of asking AI and technology to solve morality, truthfulness, emotions and trust.
Can we just form the Ministry of Truth already?
Interviews are being held as we speak… (<– this could be fake news)
I wrote this yesterday in the ‘Proof of Stake’ post comments about Blockchain.

“Truth” as defined by mathematicians is different from truth as defined by artists. What happens when we put Magritte upside down? Is it still a pipe or a dream? What was the “truth” during Election 2016? How did Descartes and Socrates define validity? What are the mechanisms of mining and hashing, @liad:disqus? They’re Cartesian and Bayesian (so pre-biased in their ways). Clearly, mining (in and of itself) is not an effective process for validating “truth”. Meanwhile, the “training the data and AI” side of validating content is also Cartesian and Bayesian…
as we newspeak
more than ever, no news is good news
Only doubleplusgood funful comments please.
Haha, Orwellian and Harry Potter at the same time.
hmm, so much for the first amendment.
“But it would be good if users could know the creator of a post has been online for only three hours with 35 followers, or if this is a site with a known brand and proven track record. Twitter verifies users. We ask whether Twitter, Facebook, Google, et al could consider means to verify sources as well so users know the Denver Post is well-established while the Denver Guardian was just established.”

This makes sense. But it is censorship done a different way. And it’s trivial to get around, judging by the history of how other systems have been gamed.

“We urge the platforms, all of them, to more prominently display media brands so users can know and judge the source — for good or bad — when they read and share. Obviously, this also helps the publishers as they struggle to be recognized online.”

More power and help for the incumbents. And who are these ‘media brands’? Brands you know. Brands they trust. The established players.

“For example, one of us saw an almost-all-blue map with 225K likes that was being passed around as evidence that millennials voted for Clinton when, in fact, at its origin the map was labeled as the results of a single, liberal site’s small online poll. It would not be difficult for any platform to find all instances of that graphic and pinpoint where it began.”

“Would not be difficult”? The problem is that that is one example which could be flagged, but someone doing this is going to outsmart any system that is put in place. I can think of ways right off the top of my head how I could do that. And what’s with this snotty “single, liberal site’s small online poll”? How hard would it be to present the poll as a larger poll or with other more authoritative info? What are we going to have next, audits down to bare-bones levels? Find some base material that can be posted to a university .edu site by what seems to be a highly respected member of that community. It only has to be true for a short period of time; once it’s uncovered, the damage is done.
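The “find all instances of that graphic” step the quoted suggestion mentions is typically done with perceptual hashing. A toy sketch of a difference hash (dHash) is below; the pixel grids are invented stand-ins for grayscale images, and a real system would decode actual image files, downscale them, and compare hashes within a small Hamming distance to catch recompressed or resized copies — which is also exactly where an adversary starts probing for edits that change the hash.

```python
# Sketch of perceptual hashing for finding re-uploads of the same graphic.
# "Images" here are toy grayscale grids (lists of pixel brightness rows).
def dhash(pixels):
    """Hash a grid by comparing each pixel to its right neighbor."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append("1" if left > right else "0")
    return int("".join(bits), 2)

original = [[10, 20, 30], [40, 30, 20], [5, 50, 5]]
reupload = [[10, 20, 30], [40, 30, 20], [5, 50, 5]]    # exact copy
different = [[30, 20, 10], [20, 30, 40], [50, 5, 50]]

print(dhash(original) == dhash(reupload))   # True
print(dhash(original) == dhash(different))  # False
```

Matching hashes lets a platform cluster every upload of the viral map and trace the cluster back to its earliest appearance, which is the pinpointing the quoted passage describes.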
The bigger issue that isn’t being addressed is that people don’t care if the news is fake or not. That is a deeper and systemic issue, I think. Until that gets addressed, any technological solution to fake news will be futile.
When you go to a Broadway show (Hamilton) or to a concert (Kanye) and it turns into political theater, that contributes to the noise and to people shutting down, irrespective of whether it plays to one’s sensibilities. There’s no relief or safe haven from the noise; it’s non-stop and ubiquitous.
Yes, how dare they turn Hamilton into “political theater”. Smh
Ha, got me there. But why not let the show speak for itself. No add’l editorial needed.
Unique situation. Mike Pence has spent his political career passing laws which discriminate against many actors in that cast. He has announced his intent to do the same on a national level. By going to a Broadway show he now has to encounter some of the people his policies affect in real life. Sorry if that is uncomfortable for him. To him (and maybe to you) that’s “politics”. To them it’s “life”. So when is a convenient time for them to turn off their humanity for everyone’s comfort? They delivered a confrontational but respectful and hopeful message, encouraged people not to boo, and Pence took it gracefully. President Baby-man Thin-skin was the one whining about an apology and a “safe space” which was laughable.
Although I share your political sensibilities, I do believe boundaries are important. I couldn’t care less whether Mike Pence was uncomfortable or not. That’s not my beef. Let the art form, implicitly or explicitly, deliver the message. That’s what I’m paying for, not to be the recipient of extraneous editorializing, again irrespective of whether it does or doesn’t appeal to one’s sensibilities. #time and place.
I appreciate your perspective, but: Artists Deliver Messages. The piece (the form, the writing, the casting, etc.) does make a number of strong points. But the humanity of the actors is another issue altogether. We can’t assume that Pence would appreciate the irony of him quietly enjoying the performance of an HIV-positive lead actor while he has literally voted to divert funds from treating the disease into programs that *electroshock the people that have it* (not even an exaggeration, unfortunately). So perhaps that personal appeal made the point to him. Hamilton has a unique position right now, as does Pence. If they were truly “harassing” or disrespectful I would not be saying this. Just my 2c. Reasonable people such as yourself may disagree.
Anyway… the OP was about “fake news”, so we are a bit off-topic. Cheers… and respect. P.S. If anything I have said could be construed as saying anything good about Kanye West, I withdraw the comment.
The politeness of the society you describe is gone. Fear is the status quo and until that is gone nothing is normal.
I’m hopeful fear is a transitional state. If not, we’re in a shitload of trouble. I’m hopeful there are enough checks and balances that common sense will prevail. Political rhetoric is fine for campaigning, while political reality is far more nuanced.
Unfortunately it is growing stronger, not weaker. And it will continue as the causes for it become more real. I hope you are right. I believe you are not.
Look at these nasty elections: http://www.neatorama.com/20…
I guess, Phil, but this one is the one I experienced, and it has changed my world. Still a bit unnerved, honestly.
I agree. Writing a play entitled ‘Dense’ for the stage to make a statement to audiences and the wider world about the actions of a politician is the way forward.
You mean the same actors that replied to a “whites need not apply” casting call? The same actor who glorified date rape of white women?
Wait, what?

Also, casting calls sometimes call for specifically white people. They call for all sorts of crazy things in order to fill out a character. I have a friend who is a classically trained actor. He mostly gets callbacks for cops, because he looks like a cop. It’s not a reflection of his acting skills, just the way he looks.
Yes… and hiring based on race is ILLEGAL.
Yes, as long as it’s unrelated to qualifications. And in acting, looks matter. It would be incredibly weird to put an Asian man in a role where the character is specified as black. And while Hollywood has a race problem, in a lot of ways that problem is a reflection of ours. (Mostly) blind casting in terms of race won’t work, since roles are cast as a reflection of our own stereotypes.
Please tell me more about how Alexander Hamilton was black…
I do not agree. That is their right to free speech. You can choose not to see the show if you do not like the editorializing. When people feel their way of life is being threatened by the prospect of a new government considering invasive policies that affect specific groups of people, they should express those concerns openly, without fear, and in forums that give them maximum reach.
Your reply suggests a theatergoer is aware there’ll be a post-performance diatribe, and that they have an upfront choice to attend or not attend as a result. Not the case here. This was a one-off. I’m not diminishing the importance of the cast’s message, but too often these days the lines between performance and politics are blurred, whether it’s Kanye West, Pearl Jam, Roger Waters, etc. Although I’m respectful of a performer’s beliefs, that’s not why I buy a ticket. I’m there to be entertained, while often the art form itself can serve as a strong form of expression, as is the case w/ Hamilton. Not everything needs to be politicized at every pass, although admittedly it’s not often one has an audience w/ the next VPOTUS. I frankly thought Pence erred by not responding to the cast and audience. Missed opportunity, though I disagree w/ the initial tactic.
True and I agree with what you are saying… But, assuming these are more the exceptions than the general rule, we need to be welcoming of artists choosing to exercise their first amendment rights. I believe this specific instance was justified given the acts of Mike Pence as Indiana governor against LGBT and the general concern in the country around what the Trump govt. may do.
The first amendment does not give you the right to ambush people and beat them over the head with your soap box.
Yes, it does, as long as it is not violent and not disruptive of someone else's rights. In this case, people who feel offended always have the choice to not attend the play or support the future endeavors of the Hamilton crew.
Sure, one man's bias is another man's propaganda, but that slug-fest only plays out among the folks who are engaged enough to have a strong point of view. For a large percentage of the citizenry, political debate is just background noise that they pay scant attention to, engaging with it only through osmotic background absorption. It is this osmotic-absorption crowd that has been ever more targeted by media ecology opportunists over the last two decades or so. These disingenuous "media ecology opportunists" have now elevated this discordant tactic to a formal "media ecology terrorist attack" on the epistemological decency of the public debate space.

The long-standing bias vs. propaganda slug-fest is simply an expression of the underlying nature of human subjective experience and the need for directed delusion and wishful thinking that simply wells up from the causal necessities of human biology. These new "media ecology terrorists" are a far different animal. They consciously and formally play on the subliminal nature of the osmotic-absorption crowd. They barrage the public epistemological space with unrestrained nonsense and transparently false information that stirs up conflict and derision. This circus/spectacle tactic is very effective at crowding out all other pertinent public/media attention share. They don't care that their message is transparently untrue, divisive, or socially destructive. From their tactical perspective, such sensibilities are for the chump loser crowd.

Winning is everything and the ends justify the means. These epistemological terrorists care only that their intended subliminal emotional/conceptual memes stick subconsciously, even when consciously perceived as untrue, effectively blindsiding the majority osmotic-absorption crowd, which in fact includes all of us to varying degrees. So what, you say; they are just being more clever than their opponents at shaping public perceptions, beliefs, and knee-jerks. The truth is that they are permanently poisoning the well of public epistemological debate and triangulation. They are forcing all parties down to this lowest common denominator. All stakeholders are now forced to play this epistemological-terrorist-defector, take-no-prisoners tactic or simply lose the subliminal mind share. This pushes our already near-impossible shared-reality evaluation debate, so crucial to our collective prediction/control/survival, into the realm of hopeless.

As an organismic analogue (just because this is my fanatical preoccupation), consider the effect on your own personal powers of prediction/control/survival if you were to allow such epistemological terrorist activities to take hold of your personal epistemological mindset.

I do agree with Fred that new platform-metadata truthiness tools are monumentally important. I also believe it is equally important that basic epistemological/media-ecology life skills be taught as part of the core high school curriculum. These approaches are both fundamentally necessary to combat the rising tide of epistemological-defector opportunists who threaten a hostile sabotage of the presently gathering momentum toward a networked-information-driven collaborative realism.
I agree and disagree. I think most people don't want blatantly, completely fake news. But then the question becomes: what is fake? And that's where I agree. Where does bias end and fake start? That's a line that really depends on how extreme your views are. In addition, there are some things that can be proved and some that can't. There were some really nasty personal attacks that you couldn't prove one way or the other. Whether you think they were deserved, and whether they were fake, really depends on your viewpoint.
The bigger issue than that "bigger issue" is that there appears to be sufficient technology already in existence to help in the fake news arena, but an insufficiency of will or fortitude or caring by technology leaders who already have the technical horsepower to handle fake news, and ALL information flow, whether good or bad. If technology leaders can stop innocent people from adding too many connections on LinkedIn or too many followers on Twitter or too many friends on Facebook, it stands to reason that the same defensive, protective mentality can be applied to defending against the delivery of fake news.

And, too, if LinkedIn or Twitter or Facebook can stop me from posting redundant posts, that's sufficient indication that the technology is already here to help control fake news and/or other harmful information. My redundant posts can't be deemed redundant without having been filtered|read|scanned by the platform. However inelegant the above may be, a good leader with a good understanding of technology *could* help filter out fake news.

In conclusion: we didn't wait to get 100% consensus from the American public on toothpaste contents, immunization shots, tobacco consumption, or alcohol (the most useless thing EVER conceived for people with finite lifespans lol 🙂 ). In brief: there are endless things humans consume without caring about the harm done to them, their bodies, their families, their neighbors. As on any significant issue, SOMEONE has to lead like a mature leader (mature leaders care about things non-leaders may not yet feel).
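The redundant-post check described above can be sketched as a simple near-duplicate filter: normalize the text, fingerprint it, and compare against what has already been seen. This is a minimal illustration of the idea, not any platform's actual implementation.

```python
import hashlib
import re

seen_fingerprints = set()

def fingerprint(post: str) -> str:
    # Normalize case, punctuation, and whitespace so trivial edits
    # don't defeat the duplicate check.
    normalized = re.sub(r"[^a-z0-9 ]", "", post.lower())
    normalized = " ".join(normalized.split())
    return hashlib.sha256(normalized.encode()).hexdigest()

def is_redundant(post: str) -> bool:
    """True if an equivalent post was already seen; otherwise record it."""
    fp = fingerprint(post)
    if fp in seen_fingerprints:
        return True
    seen_fingerprints.add(fp)
    return False
```

The same machinery that flags a redundant connection request or post could, in principle, flag a story that fact-checkers have already marked as debunked.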
This is sadly true. It bothers me immensely that we've stopped caring about truth as a society.
"Give trusted media sources and fact-checking agencies a path to report their findings so that Facebook and other social platforms can surface this information to users when they read these items and — more importantly — as they consider sharing them."

Seems like the profitable whales that benefit from the work product should be paying for or offsetting the cost of this, not the cash-poor "trusted media sources". That said, it once again simply creates an ecosystem of existing and anointed power. To be sure, I don't actually have a particular problem with that, but will point it out to be fair (and balanced). After all, they are the losing team.
Funny how nobody flagged this as a problem, or gave it much thought, until after the election. Hard to believe it didn't exist prior to that. It wasn't even considered in the run-up, even though it was there (back when confidence was high). Now all of a sudden it's a national emergency (to toss in some hyperbole).
The plebs are thinking on their own…black voters, Hispanics and white women didn’t do as they were told.
That is too dramatic. With the popular vote known, I think it was bad sampling and a lousy job on the swing states by the pollsters.
I didn’t reference polling at all
Sorry, I am getting rusty.
The popular vote is irrelevant. The DNC will always win the popular vote because the GOP doesn't even bother to campaign in CA anymore, since they have zero chance to get any of its electoral college votes. That's why the Electoral College exists: so that a few heavily biased states don't swing the entire election and disenfranchise the majority of other states where votes actually matter.
I am not questioning the outcome or the federated electoral system. The point I am trying to make is that the reason the pollsters failed to predict the outcome may be rooted in their reliance on algorithms too sensitive to the popular vote and less sensitive to location-based or state-level sampling. High-frequency signals vs. low-frequency, as we were discussing with Twain.
Well, they were sampling registered democrats +9 / +10 completely ignoring the fact that the primary results showed a complete lack of enthusiasm for Hillary amongst the DNC base. In fact, if you take California and the numerous proven illegal votes out of the equation, Trump won in an utter landslide in both the popular and electoral vote. PA, MI, and WI haven’t gone GOP since Reagan. That should tell you just how badly the DNC lost the working class.
What proven illegal votes?
Oh, I don’t know… How about the fact that California gives driver’s licenses to illegals and then registers them to vote?
You forgot Jews.
You know what’s scary? Consider the lack of attention to the plight of some of those less fortunate (including the white democrat UPS union guy who voted for Trump) if he had lost. They seem to only care because of a loss. But it would have mattered even if he lost by the amount that the polls were showing. Especially because of all of the outward flaws.
I've been thinking about information representativeness for a while, mostly because when I set out to make a system so that SIGNAL > noise, the flaws of content farms, blacklists, metadata, and weightings and counts in algorithms were things that appeared on the radar as "must-solves". Regardless of Election 2016's outcomes, the SIGNAL > noise problem is one I would have focussed on.

Now, that is not to say this:

* SIGNAL = Democratic content
* noise = Republican content

It is to say that the solution has to be able to filter in and dial up the SIGNALS agnostic of the political affiliations of the user.
Squelch. I was trying to explain this to my wife, who never used either a CB radio or a marine or aviation radio (the old-style type). You have this dial which you adjust so you don't get the background noise, only a clear broadcast. Anyway, the dictionary definition of squelch typically leaves someone confused, whereas in actual usage it is super easy:

"a circuit that suppresses the output of a radio receiver if the signal strength falls below a certain level."

That doesn't do the trick. Leaves people confused.

https://youtu.be/AuoBQlSgNF…
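As a toy illustration of the analogy, a squelch is just a threshold gate: anything below a chosen signal strength is muted, and only the clear broadcast gets through. This is a minimal sketch, not real radio DSP code.

```python
def squelch(samples, threshold):
    """Mute any sample whose strength falls below the threshold,
    letting only the clear signal through."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]

# Strong signals pass; background noise is suppressed.
clean = squelch([0.9, 0.05, -0.8, 0.1], threshold=0.5)
```

The fake-news parallel would be a user-adjustable credibility threshold: a dial the reader controls, rather than a filter the platform imposes.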
CNN collaborated with a candidate and told us it was illegal to read WikiLeaks. What's the metadata look like on that? To me this whole "controversy" sounds like urban echo chambers lashing out after realizing they've finally lost control. Reminds me of Dan Rather (biased) lamenting the rise of Drudge.
See my comment above where drudge is considered fake news by the liberal professor who has gained traction on “naming fake news”.. i personally think drudge is biased, but not huffpost? please..
I have no problem with bias. I have a problem with propaganda. That liberal professor sounds like he found the right occupation.
"I have no problem with bias. I have a problem with propaganda."

"Propaganda – noun – information, especially of a biased or misleading nature, used to promote a political cause or point of view."

Please explain the difference between The Drudge Report and propaganda.
I should have said “government propaganda”.
Sooo…. Breitbart then..?
Sooo… What makes Breitbart different from MSNBC and CNN, both of who have documented ties to the DNC in their editorial process?
Oh… I dunno. I guess there is a difference between “documented ties” and “The head of Breitbart is now chief strategist and senior council to the president.”
Yes… One clearly discloses their ties, the other fakes impartiality until WikiLeaks calls them on their BS.
As long as you are cool with it.
Hey, if MSNBC and CNN came out and admitted they were Clinton mouthpieces, perhaps people would believe a word that came out of their mouths.
Prove this statement, please. This is a very out-there statement.
Go read WikiLeaks…
Be more specific please.
Also if you are that curious, I can explain the differences in the editorial process
One supports your echo chamber and the other does not. There, I simpled it up for you.
I love the idea that it is spam, but I think I like your idea that bias is fine…if it is transparent…even more. I bet someone builds a filter – maybe FB, maybe a 3P – that provides the bias metadata. Like when an analyst is on a financial network and they list his personal, family, and other holdings.
That urban echo chamber is over 50% of the US population and growing. How should the people who live in urban areas feel being underrepresented politically?
They aren’t underrepresented. They are just predictable lately.
To a lot of people it does not matter if the news is fake or not. There were plenty of stories this election cycle that were quickly dismissed as fake or not entirely accurate, but in today's (to use the hot buzzword) "post-truth" media environment it didn't matter whether or not the news was accurate; the impact of the initial story had already occurred. So while in the long run a system like this might be helpful in distinguishing the problem news sources from the truthful ones, it doesn't entirely address the problem that to a lot of people what matters most is not whether the news is correct, but rather who shouts the news quickest and loudest.
Imagine the fake things that might happen in an open-sourced blockchain network. I realize there is "proof of work", but good hackers might be able to perpetuate a fraud long enough to make some real money and do some real damage.
Exactly, Jeff. That's why we have to "wrestle the octopuses" and arrive at some democratically representative definition of values. And, given the borderless nature of the Internet, those values definitions have to be universally comprehensive and coherent.
I fear that this one is much harder than spam: almost no one wants email spam, and lots of people want fake news that confirms their biases. So while it is pretty solveable from a tech perspective, I don’t see how any of those solutions work as a product.
i think there is a difference between fake news and biased news. all news is biased to be honest. but outright fake news is different.
Blockchain the hell out of it? A ledger for who said what and when. Going alllll the way back to the actual event.
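A minimal sketch of the "ledger for who said what and when" idea: each entry is hashed together with the previous entry's hash, so a retroactive edit anywhere breaks the chain. This is a toy hash chain for illustration only, not a real blockchain (no consensus, no proof of work, no distribution).

```python
import hashlib
import json

class Ledger:
    """Append-only record of who said what, chained by hashes."""

    def __init__(self):
        self.entries = []

    def _hash(self, author, statement, prev):
        payload = json.dumps(
            {"author": author, "statement": statement, "prev": prev},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()

    def append(self, author, statement):
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"author": author, "statement": statement, "prev": prev}
        entry["hash"] = self._hash(author, statement, prev)
        self.entries.append(entry)

    def verify(self):
        # Recompute every hash; a retroactive edit anywhere breaks the chain.
        prev = "0" * 64
        for e in self.entries:
            if e["prev"] != prev or e["hash"] != self._hash(
                e["author"], e["statement"], prev
            ):
                return False
            prev = e["hash"]
        return True
```

The design point: you can't silently rewrite what was said, all the way back to the first entry, without the tampering being detectable.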
Cesspool. “Fake News” is the new “Sensational” journalism on steroids. Same game, more hyperbole.
On Saturday I spent the day at DAT2016. Mostly academic papers, all floating around the topic of ethics in tracking and algorithms. On Sunday, the first of my evangelical Christian friends on Facebook posted the seasonal trope of how Happy Holidays is politically correct and she's just going to say Merry Christmas, so there, but please share. That same day I read about the role of Cambridge Analytica in the Trump campaign. If you Google them you'll see there is disagreement over how effective their psychographic profiling really is. Apparently data scientists are prone to exaggerating the accuracy of their algorithms. Who knew? But there is an element of targeting that is real and important to understand.

I experienced it this way. I decided to click on several of the most emphatic shares from these friends of mine. I wasn't thinking algorithms, just curious to see what they were reading. In less than 24 hours, yes, less than 24 hours, my news feed and the trending topics to the right had changed. It wasn't to fake news. More like a collection of partial truths that roll up to a false, or at least very slanted, impression.

Now back to Saturday. We all more or less agreed that we don't have useful ethical frameworks to apply against such rapidly growing capacity. We don't even have a very good language with which to discuss it without resorting to political and other biases.

Here's what sticks in my mind. Very soon, the drumbeat to rid us of the Consumer Financial Protection Bureau will build. It's already been out there since the agency's inception. The messaging will be that the CFPB is hurting [fill in the blank of whatever hot-button topic for you]. If anything we need a bigger budget. Use it to hire the very best data scientists and a few ethicists who understand technology enough to not be all anti-innovation. This is serious stuff.
My gut is that there are very few people who comment in this feed who stand to be hurt in any way, but the vast majority of voters are at the short end of this stick. You can bet that Cambridge Analytica doesn't make enough money from the Trump or Brexit campaigns to get good returns for investors. That profiling stuff is being sold to insurance companies, credit bureaus, and more. It will do a great deal of damage to the least of us. We owe it to them to be thoughtful about all this.
They need a brand partner.
The big issue being “Open” – the web can’t do all the spam work for Facebook, if Facebook won’t open itself up enough to be a positive actor in the ecosystem. Let’s hope Zuck’s apparent change of heart on Friday means movement.
great point John
At the end of the day, much of the responsibility is on the users. Having a good government depends on having good citizens. Good citizens read a range of reporting and opinion and make a well-informed and considered choice at the ballot box. Good social media users read a range of reporting and opinion, and curate who they follow over time for signal over noise. They act as a filter and reshare the things most worth resharing. Being a good social media user has much in common with being a good citizen in a democracy, and in both cases the system ends up being as good or as bad as the people comprising it.
A metadata scheme which attempts to quantify "accuracy" (let's stop using the word "truth") in a value-neutral way is much needed, and feasible. Wikipedia provides an example. This should be able to apply to any information presented on the web and should evaluate the confirmability, reputation, and "self-interest" of the source. These do not need to be pejorative. A company's website could be described as "self-interested" though still perceived as accurate.

The real issue isn't what is described as "fake news" per se. There was a shockingly small group of idiots with stunted senses of humor who thought they were making some satiric statement by literally making up news and showing people how gullible they were. They could probably be managed.

The bigger issue is some scheme to tag "real news" sources with a high degree of bias and self-interest. The play would of course move to gaming these tagging schemes, but great minds would need to take this into account as well.
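To make the idea concrete, such a scheme might attach a small record to each source covering the three dimensions named above. The field names, scales, and weights below are purely illustrative assumptions, not any proposed standard.

```python
from dataclasses import dataclass

@dataclass
class SourceMetadata:
    # Hypothetical fields; each dimension is a 0-1 score.
    source: str
    confirmability: float   # how independently verifiable its claims are
    reputation: float       # track record of accuracy and corrections
    self_interest: float    # how much the source benefits from the claim

    def accuracy_score(self) -> float:
        # One possible value-neutral roll-up; the weights are arbitrary.
        # Note self-interest lowers the score but is not pejorative on
        # its own, per the comment above.
        return round(0.4 * self.confirmability
                     + 0.4 * self.reputation
                     + 0.2 * (1 - self.self_interest), 3)
```

A platform could surface the three raw fields rather than the rolled-up score, leaving the value judgment to the reader.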
exactly. it took me about forty comments to find one that understood what i was trying to say.
Fred – I am humbled, as I respect your posts tremendously. – Eric Chaikin.Of course – here’s a start at it, hacked over the weekend 😉 http://mashable.com/2016/11…
It is relatively simple to produce a probabilistic measure of accuracy. The primary difference between inaccurate information and spam is that no one save the most gullible need be told that Nigerian fella isn’t really going to give you 15 million dollars. Otherwise it is nigh on impossible to convince a man that something he desperately wishes to be true is false. This is why we end up suffering through various “if there’s smoke there’s fire” arguments whenever some muckraking character assassination is debunked. Put simply, your scheme is doomed to fail because the average American is a credulous moron.
Whether the smoker chooses to heed the Cancer warning, it is better for all that it is there.
I'm not certain that is true. The cancer warning had a short shelf life before consumers became fatigued. Authorities escalated with graphic pictures of diseased tissues and dying patients, which managed to shock for all of about 5 minutes. Now the worldwide trend is toward plain packaging. And yet, "you've got to die of something, don't ya?"

What motivates people to give up cigarettes is an excise that makes them prohibitively expensive. In that sense I prefer @liad's strategy.
Great comment. Do you think it could be done with a combination of crowd-sourced intelligence and native analysis? The operative term here may be "value-neutral". Even if the service can provide only indicative signals on the accuracy of any news item, it will be better than the status quo, where the reader is pretty much left on his own to figure this out.
Facebook is scared of looking like it is censoring people, but I think there is a very simple solution that can do a lot to help. I believe that most people are nice and smart and do not want to spread false information, but they end up sharing because they see someone they trust sharing and they just click on share. Whenever the Facebook algorithm suspects that something is false (or knows that it is an old article making the rounds again), it could just warn whoever is sharing about this and let them decide. I'm pretty sure that this simple solution could filter out a significant portion of fake news spreading.
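A minimal sketch of that warn-before-sharing idea. The flagged-domain list, the staleness threshold, and the warning text are all hypothetical, not any platform's actual behavior.

```python
from datetime import date
from typing import Optional

# Assumed inputs: a set of domains disputed by fact-checkers, and a
# staleness cutoff for old articles resurfacing as if they were new.
FLAGGED_DOMAINS = {"example-hoax.com"}
STALE_DAYS = 365

def share_warning(domain: str, published: date) -> Optional[str]:
    """Return a warning to show before sharing, or None to share freely."""
    if domain in FLAGGED_DOMAINS:
        return "This story has been disputed by fact-checkers. Share anyway?"
    if (date.today() - published).days > STALE_DAYS:
        return "This article is over a year old. Share anyway?"
    return None
```

The key design choice is that the user still decides; the platform only adds friction, which sidesteps the censorship objection.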
Vitor Conceicao: The people who helped DJT get into the WH intentionally lied and reposted fake news knowingly, and had a major cable channel helping push it. This was done constantly by at least two regular contributors on this blog. We are not referring to the satirical posts but the intentional reposts. There will be those who will attempt to confuse what actually happened. Just ridiculous to act like this never occurred.
I'm from Brazil, where we have a very similar problem; the biggest difference is that while in the US the fake news machine is in the hands of the right, here it's in the hands of the left. Of course there are people who spread fake news intentionally. But they are only successful because naive people end up helping them by sharing the same news without knowing it is fake. But of course this all comes down to your world view; I'm more into the nurturing-father world view than the strict-father one.
Vitor Conceicao: We send our love to Brazil and welcome all the Senhoritas in Brazil. Please export the beautiful Senhoritas. Our Voce is a little weak. Thanks in advance.
What we're seeing is the exposure of the lie that news was ever really about "truth" at all. It was always about selling a vested interest, and now we simply see that reality more clearly than ever before. It's not the first time this has happened. Earth is not flat, not 6,000 years old, not the centre of things…
jason wright: "Earth is not flat, not 6,000 years old, not the centre (sic) of things."

Better let the VPOTUS-elect know. Instead of addressing the asinine positions against science and facts, the supporters just deflect. The usual, which has actually worked.
Your job I’m afraid. He’s not my VPrez (sic) 🙂
Jason wright: In other fake news. Keep pumping it. The weak and gullible will continue to drink from that empty well. After eight years the country will want the other side to clean up that mess of deficits and high interest rates that favor those lending the money (banks). And let's include a corrupt earmark to top it off. A bunch of corrupt phonies. Appointments of old relics of Washington and Wall Street donors. The sham.

BTW: Pence is the VPOTUS-elect of all US citizens, like him or not.
So we need a form of curated truth?
CONTRIBUTORS: FOR THE LOVE OF GOD! When we Independents challenged the lies, misinformation, fake news reposts, etc., where were all you Elites who were so much above the crap, defecation, discharge, dung, excrement, excretion, fecal matter and feces? Hiding, closing your mouths, ears, and eyes. Just as Zuckerberg attempted to do.

You are in denial regarding the bigotry, racism, misogyny, and xenophobia that many (more than you care to admit) foster more openly now. Kumbaya my lord will not work. While you are singing, they are continuing with the appointments and nominations to let the rank and file know they have arrived. Really pathetic. We anticipate the Progressive attacks but are too weak to address the elephant in the room. Should have addressed this before the election. They don't care what you do now. The damage is done. Thanks, Progressive Elites. Head in sand. Keep shimming.

https://youtu.be/zXW0U-yGySM
http://www.chicagotribune.c… Some college students hacked together an open sourced solution. I don’t know anything about it. Just read about it in the paper.
I remember managing spam as annoying but not over-burdensome (AOL -> Yahoo -> Gmail). Perhaps this was manageable because the universe of people sending emails, and therefore spam emails, was smaller. With news, you can research which sources are biased in what ways and which are accurate. Here is an example: http://www.pewresearch.org/… . One can adjust social media feeds to reflect news from sources that you regard as accurate. Treating fake news as spam does not seem to be a serious issue outside of the significant resources it takes to deal with spam.
As an intermediary step, I think social networks should help users contextualize the media/news they're consuming online. Today each article exists in a vacuum, and users are sharing without appreciating the author's or media outlet's slant. Exposing network structure may help users understand the type of news they're reading. (But note that this is more of a DIY approach, as opposed to automatically filtering out fake news.)

How would this work? Information transmitted through social networks (and even Google, through hyperlinks) is connected to other information that's published on the site. If I'm a writer for a fake news organization, my article may be clustered among other fake news organizations (depending on how it was written). If the social network could expose this network structure to users, they may be able to parse out for themselves which news/reporters/outlets are less trustworthy.

However, how a network is structured depends on the platform on which news is published and the metadata that exists, so this is more of a high-level concept than an implementable idea.
People looking for potential solutions should look at Reddit. They don't ban stories that turn out to contain clear misinformation, but they label them and the community takes care of the rest. It works well, and the community knows they're still in control; no direct censorship at all.
It’s scary to think that Facebook accounts might one day become the primary source material for the writing of 21C popular history.
Here's a riddle: what happens when the "news" is so polluted with bias and half-truths and agendas and echo chambers that eventually it is no longer viewed by humans as objective information?

Answer: it largely becomes irrelevant. People stop paying attention. And yes, they throw out the baby with the bathwater because they can't tell which is which. Noise and nonsense are exhausting. They wear you down until you opt out altogether and move on to better things. Imagine if 99% of your filtered inbox was junk mail. You'd stop checking email.

Real answer: the medium changes completely. Eventually it's not about articles and blogs and TV news. People will evolve to actually think for themselves. They stop expecting objective information to be created by impartial third parties. They realize that most issues are very complex. They wake up and decide to stop being pawns in an agenda game run by other people. And instead they proactively go out into the world and seek truth with a sense of fairness and respect and compassion. They invest the time to become educated on the few issues that matter to them. And they pridefully say "I don't know" about other issues. They realize that there are no shortcuts to being informed.

Opportunity: someone will eventually build a platform that promotes people opting out and opting in to responsible self-learning. And then we'll have a path to the future. #noechochamber

Aside: there is a great REI campaign called #optoutside, which is about being outdoors on Black Friday and opting out of the retail madness. It's a backlash against consumerism. There are backlashes everywhere. And some of them permanently change human behavior… slowly and then all at once. Agenda-driven news is so ripe for a backlash it's unreal.
Wasn't fixing email spam easier than the fake-news problem? DKIM and SPF work against structured email headers and DNS records, but fake news is unstructured content.
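Still, one could imagine a DKIM-style scheme where publishers sign structured metadata about an article and platforms verify the signature, so at least the "who published this" layer becomes structured and checkable even if the article body stays unstructured. The sketch below uses a shared-secret HMAC purely for illustration; real DKIM uses public-key signatures with keys published in DNS, and all the names here are assumptions.

```python
import hashlib
import hmac

def sign(metadata: str, key: bytes) -> str:
    """Publisher side: produce a signature over an article's metadata."""
    return hmac.new(key, metadata.encode(), hashlib.sha256).hexdigest()

def verify(metadata: str, signature: str, key: bytes) -> bool:
    """Platform side: check that the metadata wasn't forged or altered."""
    # compare_digest avoids timing side channels when checking signatures.
    return hmac.compare_digest(sign(metadata, key), signature)
```

Such a scheme would not tell you whether a story is true, only that the claimed publisher really made the claim, which is exactly the (limited) guarantee DKIM and SPF give email.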
What is to stop someone from hacking it to create more fake news?
Fred. Simple elegant solution. Who will make it happen?
Not sure if anyone is still following this thread, but if there are folks who are interested in comparing editorials and opinion articles across newspapers, I put this site together over the past few days: http://opposingviews.heroku…I’m running the articles from 10 of the most popular newspapers through IBM’s sentiment analysis tool. Please do reach out if you have any feature requests.
Facebook and Twitter have weaponized the citizenry and we no longer consume news, we use it to fight for our “team”.You can have my fake news when you pry it from my cold dead phone.
Agree with the first point in your first comment. Agree 50% on faulting FB and Twitter. The rest comes from "real" news sources. "Real" news these days looks more like entertainment than journalism. A big part of the problem comes from that. But I am not sure what the solution is here.
Well yes, the financial incentives are incredibly misaligned for anything related to “journalism”.
Resulting in two guys in an apartment like this, spreading fake news: https://www.washingtonpost….
I’m in awe that people can feel good about their lives doing that.