Bluetooth Everywhere
We checked into a hotel today and next to the desk in our room was a block of USB ports to charge our phones with and a Bluetooth button. I pushed the button and my phone paired with the room. Now I can play music on the sound system in the room with my phone.
There is nothing special about that really. We all do the same thing with our cars and headphones regularly now. But when it works as seamlessly as it did for me today, that is nice.
The truth is that Bluetooth is everywhere these days. And the pairing thing, which used to be such a hassle, seems to get easier and easier every day.
As I’ve written before, the power of non-proprietary protocols like Bluetooth is pretty impressive to see in action. The more they get adopted, the more they get used, and the more they get used, the more they get adopted. It’s one big virtuous adoption cycle and the beneficiary is us.
Comments (Archived):
I just ordered George Gilder's book on Google/Blockchain and was wondering if this is accepted wisdom? The latter of course.
Instant pairing needs to happen. It is really annoying to have to search and enable visibility, etc., and NFC is not that widely adopted.
The problem with Bluetooth: walk too far away from the speaker, the connection dies, and no more music. Would be great to see the adoption of a non-proprietary streaming protocol. Most streaming speakers support either Chromecast or AirPlay, but not both. That kills a lot of use cases.
The problem with Bluetooth: it's needlessly complicated, and advanced features never work unless there's a supporting standard. Apple did this with their MFi program, which effectively requires vendors to support two protocols (BT proper and MFi). Google didn't engage, and as a result BT-LE tags never really work with Android. Check reviews for any Bluetooth tag on Amazon or its app on Google Play and you'll see hundreds of pissed-off customers. Tags on Android just do not work. https://developer.apple.com…
I still do not trust Bluetooth pairing and as a result I seldom use it. When there is a 10% or more failure rate in an experience, one does not end up trusting the technology. Quality control these days seems to be losing ground to pushing out product. I check in every now and then to see how this technology is doing, but as it stands right now, I am skeptical at best that it will pair properly. Could just be an iPhone thing though…
Picture of the button?
Hotels are target-rich environments for hackers and their networks are notoriously soft targets. Isn't this risky from a security perspective?
Yes, it is risky. There have been a couple of major Bluetooth vulnerabilities published in the last two years. That's not a lot for a widespread protocol, but you can bet cash money that there are unpublished vulnerabilities for Bluetooth in the wild.

To be fair, plugging your phone into the USB ports at the hotel or connecting to free wifi at random coffee shops is also risky. Security is always a risk vs. reward trade-off. You have to decide if what you gain is worth the risk.

With USB, you can use a power-only USB cable to eliminate all of the risk except power attacks designed to fry your electronic equipment. For random wifi hotspots, you can use a VPN to eliminate most of the risk from snooping, etc.
and you can bet that there are unpublished vulnerabilities for it in the wild available for cash money, if the NSA’s budget gets squeezed.
Absolutely. There are companies built around the “zero-day vulns for sale” business model.
Solution: There is a multi-dimensional, distribution-free, zero-day anomaly detector with false alarm rate adjustable and known exactly in advance. It's some good, original applied math.

Comparison with AI: From all I can see, it makes current approaches via artificial intelligence (AI), machine learning, and data science look at best like baby drool.

Origin: The work was done in an AI project at IBM's Watson lab and published in the Elsevier journal Information Sciences; the goal of the work was to improve on AI for such detection.

Statistical Power: In the assumed real context, there is too little data to achieve the Neyman-Pearson most powerful result, but in still an important sense the work is the most powerful possible.

Computations: The work needs a fast algorithm, and there is one building on k-D trees, but it is not in the paper.

Statistical Context: The "multi-dimensional, distribution-free" part is solidly in the context of statistical hypothesis tests and, thus, has some solid guarantees and is not merely some heuristics or empirical curve fitting, e.g., as for AI.
Having designed and built network security products for many years, I have to ask if you’d be interested in a bridge I happen to have for sale 🙂
Uh, I'm the sole author of the work and the paper.

The paper is solid applied math, all theorems and proofs in applied probability based on measure theory, measure preserving from ergodic theory, an algebraic group, Fubini's theorem, etc. The most powerful part is another use of Fubini's theorem and is not in the paper.

The paper passed peer review and was accepted for publication with no revisions, right away. Apparently the editor in chief of the journal, a full prof at Duke, managed the review process himself.

While I intended the paper to be accessible to practitioners, some of the math proved too difficult for a surprisingly large fraction of the academic computer science community. From several journals where I made just an informal submission to gauge interest, I got back from the editors in chief approximately or exactly: "Neither I nor anyone on my board of editors has the mathematical prerequisites to review your paper." For a prof at MIT and editor in chief of one of the best computer science journals, I wrote tutorials for two weeks before he gave up.

All of my favorite professional associates would find the math in the paper curious and novel and a bit intricate but otherwise routine reading. But these people are far from software for monitoring server farms and digital communications networks. For anything like my paper, the computer science profs just didn't take the prerequisite courses in grad school, and it's a bit much to expect them to reinvent that material.

Later I saw some work from the Stanford-Berkeley RAD lab (Reliable Adaptive Distributed systems Laboratory) by profs Fox and Patterson, funded by Google, Microsoft, and Sun, and their work looked far inferior to mine. Having "designed and built network security products" isn't very close to stirring up some theorems and proofs for some best possible anomaly detection.

Some of the AI work my work improved on had rules something like: "When I see one of these and at least two of these five other things, it looks bad; raise an alarm." Problems included:

(1) False Alarm Rate. There was nothing about false alarm rate — the rate was not known in advance and not really adjustable (there is always a wasteful trivial adjustment). In practice, a false alarm rate that is unknown but apparently too high is a major problem because it has the server farm bridge or network operations center staff wasting time in wild goose chases and, then, ignoring too many real detections.

(2) Detection Rate. There was no information on getting the best detection rate for the false alarm rate being tolerated. In an important sense, my work gives, for the false alarm rate being tolerated, the best possible detection rate from any means of processing the data.

Suppose you are investing in real estate. You have information, including estimates of ROI, on 500 properties. So, …, right, you sort the 500 in descending order on ROI and buy properties down that list until you run out of money. That's essentially the classic Neyman-Pearson best possible result and proof. Yes, looked at carefully, it's a knapsack problem in NP-complete, but that fact is rarely a problem in practice and is not a problem in my work.

So, in anomaly detection, the ROI is the detections of real problems, and the money is the false alarms you tolerate. So, you raise alarms in the situations with the highest detection rate for a given false alarm rate until you have spent all the false alarm rate you want to tolerate. This is from just a junior-level course for undergrad math majors in mathematical statistics.
Rules are far, far inferior to such things.

(3) Deployment Effort. Writing the rules required low-level experience and intuitive insight from "experts"; my work needed only some high-level views of what data might be relevant for the systems being monitored, with no low-level experience or intuitive insight required.

Uh, about that bridge, "I'm reticent. Yes, I'm reticent."!! 🙂
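For readers who want the real-estate analogy above made concrete, here is a minimal sketch of that greedy "spend the false-alarm budget on the highest-ratio alarms first" selection. Everything in it (the rule names, the rates, the choose_rules helper) is a hypothetical illustration of the analogy, not anything from the paper being discussed.

```python
# Toy illustration of the "sort by ROI, spend the budget" analogy:
# each candidate alarm rule has an estimated detection rate (benefit) and an
# expected false-alarm rate (cost). Greedily enable rules with the best
# detection-per-false-alarm ratio until the false-alarm budget is spent.
from dataclasses import dataclass

@dataclass
class AlarmRule:
    name: str
    detection_rate: float    # expected real detections per day from this rule
    false_alarm_rate: float  # expected false alarms per day from this rule

def choose_rules(rules, false_alarm_budget):
    """Greedy selection in the spirit of the Neyman-Pearson analogy: highest ratio first."""
    ranked = sorted(rules,
                    key=lambda r: r.detection_rate / r.false_alarm_rate,
                    reverse=True)
    chosen, spent = [], 0.0
    for rule in ranked:
        if spent + rule.false_alarm_rate <= false_alarm_budget:
            chosen.append(rule)
            spent += rule.false_alarm_rate
    return chosen, spent

if __name__ == "__main__":
    # Hypothetical numbers for a hypothetical server farm.
    candidates = [
        AlarmRule("cpu_spike", detection_rate=3.0, false_alarm_rate=0.5),
        AlarmRule("odd_port_traffic", detection_rate=1.0, false_alarm_rate=0.1),
        AlarmRule("login_burst", detection_rate=0.5, false_alarm_rate=2.0),
    ]
    chosen, spent = choose_rules(candidates, false_alarm_budget=1.0)
    print([r.name for r in chosen], "false alarms/day spent:", spent)
```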
Congrats. I look forward to seeing your advances in network security (seriously). It's a big jump from an academic paper to a product that works reliably in real networks, so color me skeptical, but improvements in the state of the art are welcome. Most just don't pan out in practice.
I wrote some prototype software, ran it on some real data from a "cluster" and some very tricky synthetic data, included the results in the paper, and the false alarm rate calculations came out fine. The synthetic data permitted evaluating detection rates, and they were terrific.

As for what works in practice, actually just old thresholds work in practice. So there's really no big question that my techniques will be useful in practice.

Deployment on a small scale would be trivial to do — write some software where 1000 lines of code is a lot, and get some data from the usual sources, e.g., SNMP, Microsoft's monitoring APIs, some of the data HP has, likely some of the data from SQL Server, Oracle, Cisco, etc.

For monitoring a server farm of a square mile, there would be some questions about what all to monitor. For answers there, one could do some more research.

But the flat situation is this: not enough people are interested. The main work is done and in the paper; I touted it here, and not for the first time; but even given the work, peer-reviewed, theorems and proofs, lots of intuitive support, there is essentially no real interest.

Zero day anomaly detection is a big, practical problem, but for progress people would rather just not be bothered. Instead they will stay with thresholds and signatures and f'get about anything new.

The whole subject is necessarily highly theoretical since with zero day we are looking for problems never seen before. Theoretical or not, in practice zero day anomaly detection is serious work once, with my work, we have a good way to do it.

Sure, from the values of some other efforts in system monitoring, my work should be the core of a startup worth about $1 billion, quickly. With a good push, nearly no CIO of a major site will feel free not to deploy my work.

I'm ready, willing, able, and eager and have been for years.

Actually, the new solid state disks would give the techniques a really big shot of caffeine.

But to make it work as a business, I would likely have to spend some money to look serious. E.g., the main target customers would be the CIOs of some of the most important sites in the world. It is possible to get to such people; e.g., I gave a nice presentation at the main NASDAQ site in Trumbull, CT. Got a nice luncheon, etc.

Could blow a few hundred thousand dollars before getting to positive cash flow.

I can absolutely, positively guarantee you that 99 44/100% of the information technology VCs in the US, no matter what pitch deck I send them, will just ignore the contact. They just will NOT consider the technology. Instead the situation is clear: I get customers and revenue growing quickly or f'get about it. They will invest in traction, significant and growing rapidly, and ignore anything and everything about technology before traction.

For me, I have another project, a startup, potentially much more valuable and much easier and cheaper to get to nice profitability.

Sure, I may deploy my anomaly detection work on the servers for my startup. Then, if my startup gets famous and I meet some high-end CIOs, maybe at some bent-elbow event in Manhattan, and mention my anomaly detection work and they visit my server farm, then maybe I'll be able to get some anomaly detection sales. Alas, my startup is so much better as a business than anomaly detection that I'll likely just f'get about it.

Net, the US information technology industry and its VCs just will not, Not, NOT, feet locked 6 feet in reinforced concrete, NOT look at new technology. They won't do it.
They will let some zero day attack rob them of private data on millions of people and in response do next to NOTHING. They can see significant businesses threatened with going out of business and do nothing — and they have.

Basically the CIOs and VCs are not able to evaluate the promise of new technology. Problem sponsors at NIH, NSF, DARPA, ONR, etc. can and do so routinely — CIOs and VCs, nope.

So be it. I learned my lesson. My startup has some nice, new technology, but my business is now easier to start than a grass mowing service, literally.

There's a detection of an anomaly!
Happened across anything crypto related yet, ATMs, trading billboards, payment options at retail outlets?
What are the security risks in being totally discoverable that way, and connecting so easily to such open networks? What if the person in the room next to yours taps into your bluetooth network and spoofs it?
they probably don’t need to be in the next room, or even the same country. that’s the world we live in. nothing is truly secure.
"What are the security risks in being totally discoverable"

You see, that is the thing with security. There are not only things you don't know but things that you don't even know exist. Plus people make mistakes in terms of securing things even if they know what they are doing and/or the risks.

I was in a hotel a few years ago and was able to (without any hacking) browse files on devices connected to the same wifi network at the hotel. They just show up in the sidebar, and some had opened up their laptops in a way that allowed you to see what they had. Plus in many cases you could tell who they were because they named their machines with their names. A curious guy like me finds that type of thing entertaining. I see similar behavior also with some of my neighbors. They name their hotspots with their names. These are people that I don't know. So I google the name and then see that they live within range of me. No obfuscation at all. So here is my point: I am sure that is not something they even know is happening. No way.

Separately, I got an email from a customer yesterday who got one of those bitcoin scam emails. The email claimed that they had broken into his email account because they had sent the email from his email address. Now he didn't even know that someone could easily fake a reply email address. I think I learned that 22 years ago? So he wanted to know what to do. The email contained a bitcoin address. I just checked and it looks like a few people have been fooled and already sent the scammer $500 in bitcoin (shows as .08).

The email was super hokey to me. But "normals" don't know that. To them it looks legit… https://uploads.disquscdn.c…
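To make the "fake a reply email address" point concrete: the From header on a mail message is just text the sender fills in, and plain SMTP does not verify it, which is why receiving servers lean on checks like SPF, DKIM, and DMARC. A minimal sketch, using placeholder example.com addresses; nothing here sends any mail.

```python
# The From header is an arbitrary string chosen by the sender; SMTP itself
# does not check it against the account actually doing the sending.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "victim@example.com"   # placeholder; any string can go here
msg["To"] = "victim@example.com"
msg["Subject"] = "Looks like it came from your own account"
msg.set_content("The sender chose the From header; nothing verified it.")

# Only builds and prints the message so you can see the headers.
print(msg)
```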
To me it's very interesting. There are many factors that lead to security. The chances you get hacked depend on your value:

1. Low, but it happens: don't care.
2. What if the cost is high if you do get hacked?
   a. You have ignorant bliss until it happens.
   b. You prevent it with a ton of security.
   c. You prevent it with 2b plus causing immense pain to the hacker.

We don't have 2c. For instance, you come in and steal from a Russian mobster in NYC. Well, after he feeds your entire family unit to pigs in front of you, he does unspeakable things to your private parts but prevents you from dying until he inflicts more pain.

We could do more of 2c.
Anyone who makes a Bluetooth device has to pay to license the name – wouldn't quite call it non-proprietary, just cheap.
and do King Harald's descendants receive a cut of the action?
Randall Munroe did a comic about the state of Bluetooth a few weeks back: https://xkcd.com/2055/ Jokes aside, we're not quite there yet, but he does lay out a good picture of where we should get to.