source: wired.com

AUTOMATIC LICENSE PLATE reader cameras are controversial enough when law enforcement deploys them, given that they can create a panopticon of transit throughout a city. Now one hacker has found a way to put a sample of that power—for safety, he says, and for surveillance—into the hands of anyone with a Tesla and a few hundred dollars to spare.

At the Defcon hacker conference today, security researcher Truman Kain debuted what he calls the Surveillance Detection Scout. The DIY computer fits into the middle console of a Tesla Model S or Model 3, plugs into its dashboard USB port, and turns the car's built-in cameras—the same dash and rearview cameras that provide the 360-degree view used for Tesla's Autopilot and Sentry Mode features—into a system that spots, tracks, and stores license plates and faces over time. The tool uses open source image recognition software to automatically display an alert on the Tesla's screen and the user's phone if it repeatedly sees the same license plate. When the car is parked, it can track nearby faces to see which ones repeatedly appear. Kain says the intent is to offer a warning that someone might be preparing to steal the car, tamper with it, or break into the driver's nearby home.
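The repeat-sighting logic Kain describes (flag a plate that keeps turning up) can be sketched in a few lines of Python. This is a toy illustration, not code from the Surveillance Detection Scout; the class name and alert threshold are assumptions, and in a real system the plate strings would come from an ALPR engine rather than being passed in directly:

```python
from collections import defaultdict
from datetime import datetime

class PlateTracker:
    """Toy sketch of a repeated-plate alert. All names here are
    hypothetical, not taken from Kain's tool."""

    def __init__(self, alert_threshold=3):
        # How many sightings of one plate before we raise an alert.
        self.alert_threshold = alert_threshold
        # plate string -> list of datetimes when it was seen
        self.sightings = defaultdict(list)

    def record(self, plate, when):
        """Log one sighting; return True once the same plate has
        been seen alert_threshold times or more."""
        self.sightings[plate].append(when)
        return len(self.sightings[plate]) >= self.alert_threshold
```

A production version would also weigh how the sightings are distributed, since a plate seen across multiple days (or multiple turns of one trip) is more suspicious than the same count in a single parking lot.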


Despite the obvious privacy concerns, Kain pitches his invention primarily as a helpful tool for Tesla owners who rate above average on the paranoia spectrum. "It turns your Tesla into an AI-powered surveillance station," Kain says. "It's meant to be another set of eyes, to help out and tell you it's seen a license plate following you over multiple days, or even multiple turns of a single trip."

source: technewsworld.com

The latest Windows 10 update from Microsoft hasn't gone smoothly for many users, due to a conflict with an outdated driver for Intel's Rapid Storage Technology (RST). The irony is that RST was designed to improve the performance of storage hardware; instead, certain driver versions have slowed the install process, and in some cases have made installing the Version 1903 Windows 10 update impossible.

Some users have been able to download a later version of RST and get the update to work, but for many others even that won't solve the problem. For now the best course of action may be simply to wait and hope the two companies can resolve the conflict.

This is just the latest in a string of software conflicts users have experienced since Windows 10's release four years ago. Microsoft has said Windows 10 will be the last version of its desktop/laptop operating system: instead of major jumps to a completely new version, Windows will receive regular incremental updates. In theory that should limit conflicts, but in practice there have been numerous problems.

Just this year Windows 10 users encountered problems with Netflix. In one case, a Netflix app wouldn't update to reflect the latest releases. Also, the May 2019 update of Windows 10 resulted in audio glitches for many users.

Such problems are not isolated to Microsoft's Windows 10, however. Similar issues affect many apps and even hardware devices that require routine updates.

source: nytimes.com

SAN FRANCISCO — Dozens of databases of people’s faces are being compiled without their knowledge by companies and researchers, with many of the images then being shared around the world, in what has become a vast ecosystem fueling the spread of facial recognition technology.

The databases are pulled together with images from social networks, photo websites, dating services like OkCupid and cameras placed in restaurants and on college quads. While there is no precise count of the data sets, privacy activists have pinpointed repositories that were built by Microsoft, Stanford University and others, with one holding over 10 million images while another had more than two million.

The face compilations are being driven by the race to create leading-edge facial recognition systems. This technology learns how to identify people by analyzing as many digital pictures as possible using “neural networks,” which are complex mathematical systems that require vast amounts of data to build pattern recognition.
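Once trained, such a network typically maps each face image to a numeric "embedding" vector, and two faces are judged to belong to the same person when their embeddings are close. The article doesn't go into this detail, so the following is a hedged sketch of that comparison step; the function names and the 0.8 threshold are illustrative assumptions, and real systems tune the threshold on labeled data:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors:
    1.0 means identical direction, near 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(emb_a, emb_b, threshold=0.8):
    """Decide whether two face embeddings likely depict the same
    person, by thresholding their cosine similarity."""
    return cosine_similarity(emb_a, emb_b) >= threshold
```

This is why the size of the training set matters so much: the network only learns to place different photos of one person close together, and photos of different people far apart, by seeing vast numbers of labeled examples.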

Tech giants like Facebook and Google have most likely amassed the largest face data sets, which they do not distribute, according to research papers. But other companies and universities have widely shared their image troves with researchers, governments and private enterprises in Australia, China, India, Singapore and Switzerland for training artificial intelligence, according to academics, activists and public papers.

Companies and labs have gathered facial images for more than a decade, and the databases are merely one layer in building facial recognition technology. But people often have no idea that their faces are in them. And while names are typically not attached to the photos, individuals can still be recognized, because each face is unique to a person.

A visualization of 2,000 of the identities included in the MS Celeb database from Microsoft. Credit: Open Data Commons Public Domain Dedication and License, via Megapixels

source: wired.com

WHEN YOU THINK about how hackers could break into your smartphone, you probably imagine it would start with clicking a malicious link in a text, downloading a fraudulent app, or some other way you accidentally let them in. It turns out that's not necessarily so—not even on the iPhone, where simply receiving an iMessage could be enough to get yourself hacked.

At the Black Hat security conference in Las Vegas on Wednesday, Google Project Zero researcher Natalie Silvanovich is presenting multiple so-called "interaction-less" bugs in Apple's iOS iMessage client that could be exploited to gain control of a user's device. And while Apple has already patched six of them, a few remain unfixed.

“These can be turned into the sort of bugs that will execute code and be able to eventually be used for weaponized things like accessing your data,” Silvanovich says. “So the worst-case scenario is that these bugs are used to harm users.”

Silvanovich, who worked on the research with fellow Project Zero member Samuel Groß, got interested in interaction-less bugs because of a recent, dramatic WhatsApp vulnerability that allowed nation-state spies to compromise a phone just by calling it—even if the recipient didn’t answer the call.