source: (submitted by Artemus FAN, Steve Jones... thank you, Steve! ;-) )

DroneWatcher APP: the app that turns your Android™ smartphone into a drone and small-UAV detector.

Are you concerned that a neighbor is spying on you with a drone? Is your business worried that a competitor is snooping on your operations, monitoring your yard inventory and incoming and outgoing shipments? Do you run a high-security facility, such as an airport, prison or industrial plant, that needs to monitor its airspace for unauthorized drone activity?

The DroneWatcher APP turns your Android™ smartphone or tablet into a drone and small-UAV detector that detects, tracks, alerts on and records roughly 95% of consumer and prosumer drones using advanced signals-intelligence technology. Recorded data includes the drone type and ID, which can be used to document incursions and support apprehension and prosecution by local law enforcement. Note: the DroneWatcher APP does not detect small toy drones or professional and military-grade drones.

The DroneWatcher APP runs in the background and alerts only when a drone is detected within its monitoring range (usually ¼ to ½ mile, depending on site conditions). Features in this free version include user-selectable alerts (visual and audible, with six sound options plus mute and vibrate), the ability to stop tracking specific drones, and the ability to clear the detection list. The app can also be installed and run on non-cellular Android tablets as long as the device is connected to a local WiFi network. For the best sensitivity and detection range when using the app inside a building, place the device near an external window or wall. If you use a device as a stationary sensor (e.g., to protect home or business privacy), keep it plugged into external power.

In addition to protecting privacy at your home or small business, the soon-to-be-released Pro version of the DroneWatcher APP will allow multiple Android devices running the app to be networked into a wide-area monitoring zone for security and privacy at public events (indoor and outdoor concerts, fairs, rallies, etc.), sporting events (NASCAR, stadium sports, golf tournaments, tennis and other outdoor and indoor competitions), airports, hospitals, prisons, power plants, government facilities, industrial sites, and general law enforcement. Pro features will also include expanded web services: a real-time, interactive web display of the covered area with color-coded monitoring zones, audible and visual alerts, text-message alerting, and downloadable data logs.

The DroneWatcher APP is part of the DroneWatcher system, an advanced multi-layered solution for detection, tracking, alerting and interdiction of consumer, prosumer and commercial drones and small unmanned aerial vehicles (sUAVs). DroneWatcher was developed by a US-based global leader in specialized remote-sensing technologies. It combines signals intelligence, a smartphone app and/or radar in a scalable technology that provides layered security matched to each user's security and risk-tolerance requirements. The technology is upgradable to keep pace with evolving drone and sUAV capabilities, interfaces with third-party video, acoustic and other technologies, and includes an integrated web data server that delivers real-time, consolidated situational-awareness displays for individual sites as well as regions (city, state, national).


The Quick Look mechanism on macOS, which allows users to check file contents without actually opening the files, may leak information on cached files, even if they reside on encrypted drives or if the files have been deleted.

According to Apple, “Quick Look enables apps like Finder and Mail to display thumbnail images and full-size previews of Keynote, Numbers, Pages, and PDF documents, as well as images and other types of files.”

Quick Look registers an XPC service that creates a thumbnail database and stores it in the /var/folders/.../C/ directory.

The issue, discovered by Wojciech Reguła, is that the service creates thumbnails of all supported files located in an accessed folder, regardless of whether the folder resides on an internal or external drive. It does the same for macOS Encrypted HFS+/APFS drives as well.

Because of that, the SQLite database in the directory contains previews, metadata and file paths of photos and other files in the accessed folders, depending on the file type and the installed Quick Look plugins.
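As an illustration of what such a cache exposes, here is a minimal Python sketch of querying the cache's SQLite index. The directory name (com.apple.QuickLook.thumbnailcache), the index file name, and the "files" table with folder/file_name columns are assumptions taken from public write-ups of this research, not from Apple documentation; the per-user cache directory itself is machine-specific (on macOS, `getconf DARWIN_USER_CACHE_DIR` resolves it).

```python
import os
import sqlite3

def quicklook_index_path(darwin_user_cache_dir: str) -> str:
    """Build the assumed path to the Quick Look cache index, given the
    per-user cache directory (a machine-specific /var/folders/... path)."""
    return os.path.join(darwin_user_cache_dir,
                        "com.apple.QuickLook.thumbnailcache", "index.sqlite")

def cached_file_paths(index_path: str, limit: int = 10):
    """List file paths recorded in the cache's SQLite index.
    Table and column names here follow the research write-ups (assumption)."""
    with sqlite3.connect(index_path) as db:
        rows = db.execute(
            "SELECT folder, file_name FROM files LIMIT ?", (limit,)
        ).fetchall()
    return [folder + "/" + name for folder, name in rows]
```

Even without the thumbnail bitmaps themselves, the folder and file-name columns alone reveal what a user has browsed, including files on since-removed or encrypted volumes.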

These thumbnails are created not only for the files a user has explicitly previewed with Quick Look (which automatically caches file information), but also for other files residing in the accessed folders.

Previewed files get larger thumbnails while the other files get smaller ones, but even the smaller thumbnails could be used to leak content, Objective-See’s Patrick Wardle suggests.


Being top choice as an attack vector is likely not a contest any platform wants to win. Unfortunately for Microsoft, Office will not only continue to be the attackers’ vector of choice but will also be the platform for exploiting vulnerabilities, according to a new report from Menlo Security.

After 360 Total Security blogged about “the first APT (Advanced Persistent Threat) campaign that forms its attack with an Office document embedding a newly discovered Internet Explorer 0-day exploit,” Menlo Security researchers sought to understand why attackers were using malicious Office documents for endpoint exploitation.

Malicious Microsoft Office documents attached to emails as an attack delivery mechanism are not new, but the report, Microsoft Office: The New Platform for Exploiting Zero-Days, detailed the latest examples of the growing sophistication of methods being used and highlighted the need for a more foolproof approach to security. 


Even while the paper was being drafted, a new zero-day exploit – CVE-2018-5002 – was disclosed, while two Flash zero-day vulnerabilities continued to be exploited in the wild.


“There is likely to be an increase in attacks via malevolent email attachments using stealthily embedded, remotely hosted malicious components that leverage application and operating system vulnerabilities, both old and new,” the report stated.


Researchers did find new attack methods, however. One is the use of Word documents with embedded, remotely hosted malicious components that exploit application and OS vulnerabilities to deliver zero-day exploits.


Microsoft Word is the leading cloud office-productivity platform, and its popularity is expected to grow. It will therefore, presumably, continue to be the attackers’ vector of choice and the platform most often used to exploit vulnerabilities.


The researchers found that almost all recent zero-day attacks have been delivered via Microsoft Word. “With CVE-2018-8174 and CVE-2018-5002, the attackers leveraged Word as a vector to exploit Adobe Flash Player and Internet Explorer. By using Word as the vector, the attackers were able to exploit a browser, even if it is not the default browser, and exploit Flash, even though Flash is blocked by most enterprises," according to the report.

“Microsoft is therefore undoubtedly going to become the platform that attackers leverage most to deliver their zero-day exploits,” the report concluded.


New research uses AI to automate traditional digital forensics

Experts around the world are getting increasingly worried about new AI tools that make it easier than ever to edit images and videos — especially with social media’s power to share shocking content quickly and without fact-checking. Some of those tools are being developed by Adobe, but the company is also working on an antidote of sorts by researching how machine learning can be used to automatically spot edited pictures.  

The company’s latest work, showcased this month at the CVPR computer vision conference, demonstrates how digital forensics done by humans can be automated by machines in much less time. The research paper does not represent a breakthrough in the field, and it’s not yet available as a commercial product, but it’s interesting to see Adobe — a name synonymous with image editing — take an interest in this line of work.

Speaking to The Verge, a spokesperson for the company said that this was an “early-stage research project,” but in the future, the company wants to play a role in “developing technology that helps monitor and verify authenticity of digital media.” Exactly what this might mean isn’t clear, since Adobe has never before released software designed to spot fake images. But, the company points to its work with law enforcement (using digital forensics to help find missing children, for example) as evidence of its responsible attitude toward its technology.

The new research paper shows how machine learning can be used to identify three common types of image manipulation: splicing, where parts of two different images are combined; cloning, where objects within an image are copied and pasted; and removal, where an object is edited out altogether.


To spot this sort of tampering, digital forensics experts typically look for clues in hidden layers of the image. When these sorts of edits are made, they leave behind digital artifacts, like inconsistencies in the random variations in color and brightness created by image sensors (also known as image noise). When you splice together two different images, for example, or copy and paste an object from one part of an image to another, this background noise doesn’t match, like a stain on a wall covered with a slightly different paint color.
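The noise-inconsistency idea can be sketched without any machine learning at all: estimate the local sensor-noise level in each block of the image and flag blocks whose noise statistics deviate sharply from the rest. A minimal NumPy illustration on synthetic data follows (the blur-residual noise estimator and the 3×-median threshold are illustrative choices, far cruder than the learned detectors the article describes):

```python
import numpy as np

def noise_map(img, block=16):
    """Estimate the local noise level: std of a high-pass residual per block."""
    h, w = img.shape
    pad = np.pad(img, 1, mode="edge")
    # 3x3 box blur; subtracting it leaves mostly the high-frequency noise.
    blur = sum(pad[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)) / 9.0
    resid = img - blur
    hb, wb = h // block, w // block
    return resid[:hb * block, :wb * block].reshape(hb, block, wb, block).std(axis=(1, 3))

rng = np.random.default_rng(0)
# "Authentic" image: a flat scene plus weak sensor noise.
img = 0.5 + rng.normal(0.0, 0.01, (128, 128))
# "Spliced" patch pasted in from an image with much stronger noise.
img[32:64, 32:64] = 0.5 + rng.normal(0.0, 0.05, (32, 32))

nm = noise_map(img)
# Flag blocks whose noise level is far from the image-wide typical level.
suspicious = nm > 3 * np.median(nm)
print(suspicious.sum(), "suspicious blocks flagged")  # the 4 blocks covering the patch
```

Real forgeries are far subtler than this synthetic mismatch, which is why the learned approaches the article covers look for patterns that hand-built estimators like this one miss.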

As with many other machine learning systems, Adobe’s was taught using a large dataset of edited images. From this, it learned to spot the common patterns that indicate tampering. It scored higher in some tests than similar systems built by other teams, but not dramatically so. However, the research has no direct application in spotting deepfakes, a new breed of edited videos created using artificial intelligence.

“The benefit of these new ML approaches is that they hold the potential to discover artifacts that are not obvious and not previously known,” digital forensics expert Hany Farid told The Verge. “The drawback of these approaches is that they are only as good as the training data fed into the networks, and are, for now at least, less likely to learn higher-level artifacts like inconsistencies in the geometry of shadows and reflections.”

These caveats aside, it’s good to see more research being done that can help us spot digital fakes. If those sounding the alarm are right and we’re headed to some sort of post-truth world, we’re going to need all the tools we can get to sort fact from fiction. AI can hurt, but it can help as well.