The Boston Red Sox admitted to eavesdropping on the communications channel between catcher and pitcher.
Stealing signs is believed to be particularly effective when there is a runner on second base who can both watch what hand signals the catcher is using to communicate with the pitcher and can easily relay to the batter any clues about what type of pitch may be coming. Such tactics are allowed as long as teams do not use any methods beyond their eyes. Binoculars and electronic devices are both prohibited.
In recent years, as cameras have proliferated in major league ballparks, teams have begun using the abundance of video to help them discern opponents' signs, including the catcher's signals to the pitcher. Some clubs have had clubhouse attendants quickly relay information to the dugout from the personnel monitoring video feeds.
But such information has to be rushed to the dugout on foot so it can be relayed to players on the field -- a runner on second, the batter at the plate -- while the information is still relevant. The Red Sox admitted to league investigators that they were able to significantly shorten this communications chain by using electronics. In what mimicked the rhythm of a double play, the information would rapidly go from video personnel to a trainer to the players.
This is ridiculous. The rules governing which sorts of sign stealing are allowed and which are not are arbitrary and unenforceable. My guess is that the only reason there aren't more complaints is that everyone does it.
The Red Sox responded in kind on Tuesday, filing a complaint against the Yankees claiming that the team uses a camera from its YES television network exclusively to steal signs during games, an assertion the Yankees denied.
Boston's mistake here was using a very conspicuous Apple Watch as a communications device. They need to learn to be more subtle, like everyone else.
Reuters is reporting on the distrust that international standards experts have for the NSA's Simon and Speck block ciphers:

More than a dozen of the experts involved in the approval process for Simon and Speck feared that if the NSA was able to crack the encryption techniques, it would gain a "back door" into coded transmissions, according to the interviews and emails and other documents seen by Reuters.

A number of them voiced their distrust in emails to one another, seen by Reuters, and in written comments that are part of the process. The suspicions stem largely from internal NSA documents disclosed by Snowden that showed the agency had previously plotted to manipulate standards and promote technology it could penetrate. Budget documents, for example, sought funding to "insert vulnerabilities into commercial encryption systems."
"I don't trust the designers," Israeli delegate Orr Dunkelman, a computer science professor at the University of Haifa, told Reuters, citing Snowden's papers. "There are quite a lot of people in NSA who think their job is to subvert standards. My job is to secure standards."
I don't trust the NSA, either.
New York Times reporter Charlie Savage writes about some bad statistics we're all using:
Among surveillance legal policy specialists, it is common to cite a set of statistics from an October 2011 opinion by Judge John Bates, then of the FISA Court, about the volume of internet communications the National Security Agency was collecting under the FISA Amendments Act ("Section 702") warrantless surveillance program. In his opinion, declassified in August 2013, Judge Bates wrote that the NSA was collecting more than 250 million internet communications a year, of which 91 percent came from its Prism system (which collects stored e-mails from providers like Gmail) and 9 percent came from its upstream system (which collects transmitted messages from network operators like AT&T).
These numbers are wrong. This blog post will address, first, the widespread nature of this misunderstanding; second, how I came to FOIA certain documents trying to figure out whether the numbers really added up; third, what those documents show; and fourth, what I further learned in talking to an intelligence official. This is far too dense and weedy for a New York Times article, but should hopefully be of some interest to specialists.
Worth reading for the details.
This is a good interview with Craig Federighi, Apple's SVP of Software Engineering, about Face ID.
Honestly, I don't know what to think. I am confident that Apple is not collecting a photo database, but not optimistic that it can't be hacked with fake faces. I dislike the fact that the police can point the phone at someone and have it automatically unlock. So this is important:
I also quizzed Federighi about the exact way you "quick disabled" Face ID in tricky scenarios -- like being stopped by police, or being asked by a thief to hand over your device.
"On older phones the sequence was to click 5 times [on the power button], but on newer phones like iPhone 8 and iPhone X, if you grip the side buttons on either side and hold them a little while -- we'll take you to the power down [screen]. But that also has the effect of disabling Face ID," says Federighi. "So, if you were in a case where the thief was asking to hand over your phone -- you can just reach into your pocket, squeeze it, and it will disable Face ID. It will do the same thing on iPhone 8 to disable Touch ID."
That squeeze can be of either volume button plus the power button. This, in my opinion, is an even better solution than the "5 clicks" because it's less obtrusive. When you do this, it defaults back to your passcode.
It's worth noting a few additional details here:
- If you haven't used Face ID in 48 hours, or if you've just rebooted, it will ask for a passcode.
- If there are 5 failed attempts to Face ID, it will default back to passcode. (Federighi has confirmed that this is what happened in the demo onstage when he was asked for a passcode -- it tried to read the people setting the phones up on the podium.)
- Developers do not have access to raw sensor data from the Face ID array. Instead, they're given a depth map they can use for applications like the Snap face filters shown onstage. This can also be used in ARKit applications.
- You'll also get a passcode request if you haven't unlocked the phone using a passcode or at all in 6.5 days and if Face ID hasn't unlocked it in 4 hours.
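Taken together, the fallback rules above amount to a simple predicate. Here is a minimal sketch in Python (the function and parameter names are mine, not Apple's; it only restates the rules listed above):

```python
def passcode_required(just_rebooted: bool,
                      hours_since_faceid_attempt: float,
                      failed_faceid_attempts: int,
                      days_since_passcode_unlock: float,
                      hours_since_faceid_unlock: float,
                      quick_disabled: bool) -> bool:
    """Return True if the phone falls back to requiring the passcode,
    per the Face ID rules quoted above (illustrative only)."""
    return (just_rebooted                             # fresh reboot
            or hours_since_faceid_attempt >= 48       # Face ID unused for 48 hours
            or failed_faceid_attempts >= 5            # five failed match attempts
            or quick_disabled                         # side-button squeeze gesture
            or (days_since_passcode_unlock >= 6.5     # no passcode unlock in 6.5 days...
                and hours_since_faceid_unlock >= 4))  # ...and no Face ID unlock in 4 hours
```

Note that the last rule is a conjunction: a phone that goes 6.5 days without a passcode entry still unlocks with Face ID, as long as Face ID has succeeded within the past four hours.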
Also be prepared for your phone to immediately lock every time your sleep/wake button is pressed or it goes to sleep on its own. This is just like Touch ID.
Federighi also noted on our call that Apple would be releasing a security white paper on Face ID closer to the release of the iPhone X. So if you're a researcher or security wonk looking for more, he says it will have "extreme levels of detail" about the security of the system.
Here's more about fooling it with fake faces:
Facial recognition has long been notoriously easy to defeat. In 2009, for instance, security researchers showed that they could fool face-based login systems for a variety of laptops with nothing more than a printed photo of the laptop's owner held in front of its camera. In 2015, Popular Science writer Dan Moren beat an Alibaba facial recognition system just by using a video of himself blinking.
Hacking FaceID, though, won't be nearly that simple. The new iPhone uses an infrared system Apple calls TrueDepth to project a grid of 30,000 invisible light dots onto the user's face. An infrared camera then captures the distortion of that grid as the user rotates his or her head to map the face's 3-D shape -- a trick similar to the kind now used to capture actors' faces to morph them into animated and digitally enhanced characters.
It'll be harder, but I have no doubt that it will be done.
I am not planning on enabling it just yet.
A bunch of Bluetooth vulnerabilities are being reported, some pretty nasty.
BlueBorne concerns us because of the medium by which it operates. Unlike the majority of attacks today, which rely on the internet, a BlueBorne attack spreads through the air. In this it resembles the two less extensive vulnerabilities discovered recently in a Broadcom Wi-Fi chip by Project Zero and Exodus. But the vulnerabilities found in Wi-Fi chips affect only the peripherals of the device, and require another step to take control of the device. With BlueBorne, attackers can gain full control right from the start. Moreover, Bluetooth offers a wider attack surface than Wi-Fi, one almost entirely unexplored by the research community and hence containing far more vulnerabilities.
Airborne attacks, unfortunately, provide a number of opportunities for the attacker. First, spreading through the air renders the attack much more contagious, and allows it to spread with minimum effort. Second, it allows the attack to bypass current security measures and remain undetected, as traditional methods do not protect from airborne threats. Airborne attacks can also allow hackers to penetrate secure internal networks which are "air gapped," meaning they are disconnected from any other network for protection. This can endanger industrial systems, government agencies, and critical infrastructure.
Finally, unlike traditional malware or attacks, the user does not have to click on a link or download a questionable file. No action by the user is necessary to enable the attack.
Fully patched Windows and iOS systems are protected; Linux coming soon.