source: securityweek.com

A team of researchers has demonstrated that hackers can modify 3D medical scans to add or remove evidence of a serious illness, such as cancer.

Experts from Ben-Gurion University and the Soroka University Medical Center in Beer-Sheva, Israel, have developed proof-of-concept (PoC) malware that uses a machine learning technique known as a generative adversarial network (GAN) to quickly alter 3D images generated during a computed tomography (CT) scan.

CT scanners are typically managed through a picture archiving and communication system (PACS) that receives scans from the scanner, stores them and then supplies them to radiologists. Data is transmitted and stored using a standard format known as DICOM (Digital Imaging and Communications in Medicine). PACS products are provided by companies such as GE Healthcare, Fujifilm, Philips and RamSoft.
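DICOM data is easy to recognize programmatically: a standard DICOM Part 10 file begins with a 128-byte preamble followed by the four ASCII magic bytes "DICM". A minimal Python sketch (illustrative only; the fabricated byte string is not real scan data) shows the check:

```python
# Minimal sketch: recognize a DICOM Part 10 file by its header.
# Such files start with a 128-byte preamble (often all zeros)
# followed by the 4-byte magic string "DICM".

def is_dicom(data: bytes) -> bool:
    """Return True if the byte stream looks like a DICOM Part 10 file."""
    return len(data) >= 132 and data[128:132] == b"DICM"

# Fabricated example: zero preamble + magic + arbitrary trailing bytes.
fake_scan = bytes(128) + b"DICM" + b"\x02\x00\x00\x00"
print(is_dicom(fake_scan))        # True
print(is_dicom(b"not a dicom"))   # False
```

The same header check is what file-type scanners and network probes typically use to spot DICOM traffic or storage.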

One problem, according to the researchers, is that PACS and DICOM servers are often left exposed to the internet. A scan conducted using the Shodan search engine identified nearly 2,700 such servers connected to the internet. Another issue is that medical imaging data is in many cases transmitted over the network unencrypted, which exposes it to man-in-the-middle (MitM) attacks and manipulation.
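The MitM risk follows directly from the lack of transport protection: bytes altered in transit arrive with no integrity failure, so the receiver simply trusts them. The sketch below is a generic illustration of that gap (and of how a keyed MAC would catch tampering); it is not the researchers' PoC, and the payload and key are invented for the example:

```python
import hashlib
import hmac

# Illustrative only: why unencrypted, unauthenticated transfers are
# vulnerable to in-transit manipulation, and how a keyed MAC detects it.
# (Generic example; not a description of any specific PACS product.)

payload = b"slice-042: tumor_region=none"      # fabricated scan metadata
tampered = payload.replace(b"none", b"left")   # attacker flips a finding

# Without integrity protection, the receiver cannot tell the two apart.
# With an HMAC over a pre-shared key, tampering becomes detectable:
key = b"shared-secret"  # hypothetical pre-shared key
tag = hmac.new(key, payload, hashlib.sha256).digest()

def verify(data: bytes, tag: bytes) -> bool:
    """Recompute the MAC and compare in constant time."""
    expected = hmac.new(key, data, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

print(verify(payload, tag))   # True  - untouched data passes
print(verify(tampered, tag))  # False - manipulation is caught
```

TLS on the wire, or signed DICOM objects, would close the same gap in practice; the HMAC here just makes the principle concrete.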

Malicious actors could directly target PACS that are accessible from the internet, or they could first gain access to the targeted organization’s network and launch the attack from there. Another attack vector, which the researchers tested during a penetration test conducted in a hospital’s radiology department, involves physically connecting a small MitM device between the CT scanner’s workstation and the PACS network. In these local attacks, the attackers can rely on insiders or pose as technicians, the researchers said.

 source: cyware.com

  • The study was carried out on 250 antivirus security apps from the Google Play Store.
  • Many of these apps detected only a handful of malware samples and other malicious instances on Android devices.

A new study by an antivirus-testing firm has revealed an astonishing number of security apps that do not perform as they claim. According to AV-Comparatives, which conducted the test on 250 security apps from the Google Play Store, most of them were either unreliable or had faulty software implementations.

In their findings, the firm also emphasized that many redundant apps had been developed by entities with no specific focus on security.

The big picture

  • The automated test considered 2,000 Android malware threats that were common in 2018. Additionally, 100 safe files were included.
  • Over 500,000 test runs were performed for the study. The test also featured a basic false-alarm task to check whether security apps simply label every app as malicious.
  • Out of the 250 apps tested, only 80 detected more than 30 percent of the malware samples handed to them.
  • Most of these 80 apps had detection rates between 90 and 100 percent.
  • On the other hand, 138 apps detected less than 30 percent of the malware samples and displayed false alarms for safe files.
  • The remaining 32 apps had already been removed from the Play Store after being identified as ‘Potentially Unwanted Applications’.

Not just ineffective but risky - The test report mentioned that these ineffective apps were likely dangerous.

“A number of the above apps have in the meantime already been detected either as Trojans, dubious/fake AVs, or at least as “potentially unwanted applications” (PUA) by several reputable mobile security apps. It is to be expected that Google will remove most of them from the Google Play Store in the coming months (and hopefully enhance their verification checks, thus blocking other such apps from the store),” the AV-Comparatives’ blog indicated.

Why user ratings may not mean much - The firm has also cautioned mobile users against blindly trusting user ratings or the availability of the latest updates as indicators of an AV app’s effectiveness.

Instead, it has advised users to try out AV apps before buying them, and to check relevant factors such as the privacy policy, app permissions, and developer information.

 source: theverge.com

You’ve seen the prototypes, but now Ikea has set the date to show off its first Sonos-powered Symfonisk speakers. The “products” (plural) will be revealed on April 9th in Milan, before they’re expected to go on sale in August.

“Ikea and Sonos have showed a prototype to the world,” reads the Ikea press release, “a book-shelf speaker that will give customers a great connected speaker that enables a multi-functional usage in the home, at an affordable price.”

The cheapest Sonos speaker — the Play:1 — currently sells for $149.

[see the video at https://www.theverge.com/2019/3/15/18266992/ikea-symfonisk-sonos-speaker-date-announcement]

A teaser video posted by Ikea shows two digitally masked speakers: one hanging on the kitchen wall (or is it perched on a shelf?) and another next to a sofa on a side table. Or maybe we’re looking at a single speaker with multiple mounting options. Either way, what we’ll see in April will be the first in what’s likely to be a series of speakers, just as we’ve seen with Ikea’s burgeoning Tradfri range of low-priced smart lights, dimmers, switches, and, soon, controllers for electric blinds.

Sonos has said that the Ikea speakers will fully integrate with Sonos’ existing range of wireless speakers, as well as Ikea’s Tradfri range of smart devices. The Ikea and Sonos Feel Home exhibition will run between April 9th and 14th as part of Milan Design Week.

 source: defenseone.com

A new effort to build patrol drones for urban fights began by forming an ethics advisory board.

A DARPA program seeks AI-infused drones that can help prevent friendly-fire incidents and civilian casualties in urban battles. But the truly innovative part of the Urban Reconnaissance through Supervised Autonomy (URSA) effort might be the inclusion of ethics advisors from the very start.

“It’s the first time we’ve considered this program approach. Certainly [before] we might consider it at the end,” said Lt. Col. Philip Root, program manager for the Urban Reconnaissance through Supervised Autonomy program. “It’s not that program managers shy away from that, but here we wanted to try something different which was invoke this analysis early on, and it’s proven to be absolutely essential.”

Root said URSA aims to collect information about people in complex warfighting environments, in order to help humans understand who is a threat.

“We really want to try to ensure we allow non-hostiles, non-combatants, to move out of the way. Future urban conflict is going to take place in large cities where the population can’t just go to the mountains,” Root said. “So we have to consider that all this is going to occur around people who don’t want to be there.”

Root said the development of such technology that interacts with humans is “fraught with legal, moral, and ethical implications,” which is why the ethics team was involved from the outset.

“We met [with the ethicists] even before we had technical performers on contract to begin thinking about the ethical problems we have and actually putting it on paper,” Root said. “Don’t know if the technical [side] will actually work, but we know it will be far more ethical and aligned with our national ethos.”