Forensics Blog

Forensic Cases

Analysis of and commentary on real cases.

Malware

Reversing, QuickScan, dynamic and static analysis.

Digital Cybercrime

Criminal activities carried out with the help of computer tools.

Mobile Device Forensics

Covers the identification, preservation, acquisition, documentation and analysis of information from mobile devices.

IT Forensics, Hacking, Digital Crime

Focus on articles and documentation related to cybercrime.

Posts recientes

28 Feb 2015

Lorenzo Martínez - COOKING AN APT IN THE PARANOID WAY - EKOPARTY 2014

Lorenzo Martínez - CSI Workshop Ekoparty 2014


26 Feb 2015

Lorenzo Martínez - CSI MADRID - Workshop Ekoparty 2014

25 Feb 2015

[Video] Basic Guide to Advanced Incident Response

23 Feb 2015

Video - Live Forensic Acquisition Techniques

There are many commercial tools for pulling this information from a live ("hot") system, but they are expensive and do not always fully achieve their goal. This video reviews some useful methods and items for quickly acquiring digital evidence, and shares some open-source automation scripts to help with the acquisition process.
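The open-source automation scripts mentioned tend to follow a simple pattern: snapshot a few volatile facts about the live machine, then hash the record so it can be verified later in the chain of custody. A minimal illustrative sketch in Python (the field names and scope are ours, not any particular tool's):

```python
import getpass
import hashlib
import json
import platform
import socket
from datetime import datetime, timezone

def collect_volatile_snapshot():
    """Record a few volatile facts about the live system."""
    snapshot = {
        "hostname": socket.gethostname(),
        "os": platform.platform(),
        "user": getpass.getuser(),
        "acquired_at": datetime.now(timezone.utc).isoformat(),
    }
    # Hash a deterministic serialization so the record can be verified
    # later as part of the chain of custody.
    blob = json.dumps(snapshot, sort_keys=True).encode("utf-8")
    snapshot["sha256"] = hashlib.sha256(blob).hexdigest()
    return snapshot

if __name__ == "__main__":
    print(json.dumps(collect_volatile_snapshot(), indent=2))
```

A real acquisition script would add process lists, open sockets and memory dumps, but the record-and-hash step shown here is the part that makes the output defensible as evidence.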


9 Feb 2015

Various tools for memory extraction and analysis on Linux

As you all know, obtaining a dump of volatile memory and analyzing it afterwards is tremendously useful in a forensic investigation, above all because many malware artifacts use techniques that leave no data on disk.

Most memory acquisition and analysis tools were perhaps aimed at Windows systems, because for years Windows was the prime target of "malicious code". However, with the rise of Android and other Linux/Unix systems this trend is changing, and it is becoming necessary to know how to use, and keep at hand, some of the following tools:

1. Volatility Framework: perhaps the best-known collection of tools for extracting and analyzing volatile memory (RAM). Linux support, however, is still experimental: see the LinuxMemoryForensics page on the Volatility wiki. (GNU GPL license)

2. Idetect (Linux): an old implementation for Linux memory analysis.

3. LiME (Linux Memory Extractor): presented at ShmooCon 2012, a loadable kernel module (LKM) that allows memory acquisition, even on Android.

4. Draugr: an interesting tool that can search for kernel symbols (via patterns in an XML file or with EXPORT_SYMBOL), find processes (information and sections) through the kernel's linked lists or by brute force, and disassemble/dump memory.

5. Volatilitux: a Python framework, the Linux counterpart of Volatility. It supports the ARM, x86 and x86-with-PAE architectures.

6. Memfetch: a simple utility for dumping the memory of running processes, or of a process when a fault condition (SIGSEGV) is detected.

7. Red Hat's Crash: a standalone tool for investigating both live systems and kernel memory dumps produced with Red Hat's netdump, diskdump or kdump packages. It can also be used for forensic memory analysis.

8. Memgrep: a simple utility to search/replace/dump the memory of running processes and core files.

9. Memdump: dumps system memory to the output stream, skipping the holes in the memory maps. By default it dumps the contents of physical memory (/dev/mem). It is distributed under the IBM Public License.

10. Foriana: useful for extracting process and module-list information from a RAM image with the help of the logical relationships between operating-system structures.

11. Forensic Analysis Toolkit (FATKit): a modular, cross-platform framework designed to facilitate the extraction, analysis, aggregation and visualization of forensic data at various levels of abstraction and data complexity.

12. Second Look (The Linux Memory Forensic Acquisition): a commercial solution with a modified crash driver and dumping scripts.

13. Valgrind: http://valgrind.org/

14. The Coroner's Toolkit (TCT): http://www.porcupine.org/forensics/tct.html
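As a small aside on item 9: the memory-map "holes" that Memdump skips can be visualized on any Linux box by parsing /proc/self/maps. A rough sketch (the live demo assumes a Linux system; the parsing itself is portable):

```python
def find_gaps(maps_text):
    """Return (end, next_start) pairs where mapped regions are not contiguous."""
    regions = []
    for line in maps_text.splitlines():
        if not line.strip():
            continue
        addr = line.split()[0]                  # e.g. "00400000-00452000"
        start, end = (int(x, 16) for x in addr.split("-"))
        regions.append((start, end))
    regions.sort()
    gaps = []
    for (_, end), (nxt, _) in zip(regions, regions[1:]):
        if nxt > end:                           # a hole between two mappings
            gaps.append((end, nxt))
    return gaps

if __name__ == "__main__":
    try:                                        # /proc is Linux-only
        with open("/proc/self/maps") as f:
            for lo, hi in find_gaps(f.read()):
                print(f"gap: {lo:#x}-{hi:#x} ({hi - lo} bytes)")
    except FileNotFoundError:
        print("no /proc/self/maps on this system")
```

Acquisition tools must skip exactly these unmapped ranges, otherwise a naive sequential read of the address space would fault.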

Source:
Hackplayers

7 Feb 2015

E-mail Forensics in a Corporate Exchange Environment

While most e-mail investigations use 3rd-party tools to analyze Outlook data, this article series explores a few basic methods a forensics investigator can use to gather and analyze data related to an e-mail investigation in Exchange 2010, 2013 and/or Online environments, using information provided by Exchange features or using MFCMapi.

Introduction

E-mail is the most utilized form of communication for businesses and individuals nowadays, and a critical system for any organization. From meeting requests to the distribution of documents and general conversation, it is very hard, if not impossible, to find an organization of any size that does not rely on e-mail. A report from the market research firm Radicati Group states that in 2011 there were 3.1 billion active e-mail accounts in the world (an increase of 5% over 2010). The report also noted that corporate employees sent and received 105 e-mails a day on average. Royal Pingdom, which monitors Internet usage, stated that in 2010, 107 trillion e-mails were sent. That is 294 billion e-mails per day! With a quarter of the average worker's day spent reading and replying to e-mails, it is easy to see the importance of e-mail in today's world.
Unfortunately, e-mail communication is often exposed to illegitimate uses due to mainly two inherent limitations:
  1. There is rarely any encryption at the sender end or integrity checking at the recipient end;
  2. The widely used e-mail protocol, the Simple Mail Transfer Protocol (SMTP), lacks a source authentication mechanism. Worse, the metadata in the header of an e-mail, which contains information about the sender and the path the message travelled, can easily be forged.
Some common examples of these illegitimate uses are spam, phishing, cyber bullying, racial abuse, disclosure of confidential information, child pornography and sexual harassment. In the vast majority of these e-mail cybercrimes the tactics used vary from simple anonymity to impersonation and identity theft.
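The second limitation is easy to demonstrate with nothing more than a standard library: a client can claim an arbitrary From address and inject fabricated Received lines, and nothing in the message format validates them. A small Python sketch (all addresses and hostnames are illustrative):

```python
from email import message_from_bytes
from email.message import EmailMessage

# Build a message that claims to come from someone else entirely.
msg = EmailMessage()
msg["From"] = "ceo@victim-corp.example"      # forged sender address
msg["To"] = "employee@victim-corp.example"
msg["Subject"] = "Urgent wire transfer"
# A fabricated Received header imitating a relay hop; nothing validates it.
msg["Received"] = ("from mail.victim-corp.example ([203.0.113.7]) "
                   "by mx.victim-corp.example; Mon, 2 Feb 2015 09:00:00 +0000")
msg.set_content("Please transfer the funds today.")

# Round-trip through the wire format: the forged metadata survives intact.
parsed = message_from_bytes(bytes(msg))
print(parsed["From"])        # the forged sender, accepted verbatim
print(parsed["Received"])
```

This is precisely why header analysis in an investigation must be corroborated against server-side logs rather than trusted at face value.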
Although there have been many attempts at securing e-mail systems, most are still inadequately secured. Installing antivirus software, filters, firewalls and scanners is simply not enough to secure e-mail communications. Most companies have a good e-mail policy in place, but a policy alone cannot prevent users from breaching it, so monitoring is put in place in case the need for investigation arises. However, in some cases even all of this does not provide the information needed. This is why forensic analysis plays a major role: examining suspected e-mail accounts in an attempt to gather evidence to prosecute criminals in a court of law. To achieve this, a forensic investigator needs efficient tools and techniques to perform the analysis with a high degree of accuracy and in a timely fashion.
Businesses often depend on forensic analysis to prove their innocence in a lawsuit, or to establish whether a particular user disclosed private information, for example. When someone, or even the whole company, is being investigated, it is imperative that all information is thoroughly analyzed, as offenders will always use dubious methods in order not to get caught.

Scenario Information

To help exploring situations where users misuse an e-mail system and a forensics investigator is performing analysis on the system to determine what exactly happened, three fictional scenarios were created and used throughout this article:
Scenario     E-mail Subject    Offender Innocent?   Notes
1 - Drinks   Drinks            Yes                  Victim changed e-mail body in order to frame Offender.
2 - Lunch    Lunch?            No                   Offender sends inappropriate e-mail to Victim.
3 - Dinner   Dinner Tonight    Yes                  E-mail with inappropriate content sent to Victim by Hacker using SendAs permissions to impersonate Offender.

Table 1
Involved in these scenarios are three fictional characters whose names also categorize their role:
  • Offender – a user who sent an inappropriate e-mail to a work colleague (Victim);
  • Victim – in scenarios 2 and 3, this user received inappropriate e-mails. In scenario 1 she is actually the criminal pretending to be a victim;
  • Hacker – a user who managed to gain access to Offender's mailbox and sent an inappropriate e-mail to Victim (and who could simply be a co-worker).

Identification and Extraction of Data

The first steps in any e-mail investigation are to identify all the potential sources of information and how e-mail servers and clients are used in the organization. These servers no longer just send and receive simple messages; they have expanded into full databases, document repositories, and contact and calendar managers with many other uses. Organizations use these powerful messaging servers to manage workflow, communicate with employees and customers, and share data. A skilled e-mail forensic investigator will identify how the messaging system is being used far beyond e-mail, as an investigation often involves other items such as calendar appointments.
Forensic analysis of a messaging system often produces significant information about users and the organization itself. Nowadays this is much more than simply looking at e-mail messages.

Exchange Analysis

Every Exchange forensic analysis should start on the Exchange system itself. If the required information is not available on Exchange, then a deeper analysis at the client side is typically performed.
Laptops, desktops and servers once played a supporting role in the corporate environment: shutting them down for traditional forensic imaging tended to have only a minor impact on the company. However, in today's business environment, shutting down servers can have tremendously negative impacts on the company. In many instances, the company's servers are not just supporting the business – they are the business. The availability of software tools and methodologies capable of preserving data from live, running servers means that it is no longer absolutely necessary to shut down a production e-mail server in order to preserve data from it. A good set of tools and a sound methodology allow investigators to strike a balance between the requirements for a forensically sound preservation process and the business imperative of minimizing impact on normal operations during the preservation process.
To preserve e-mail from a live Microsoft Exchange server, forensic investigators typically take one of several different approaches, depending on the characteristics of the misuse being investigated. Those approaches might include:
  • Exporting a copy of a mailbox from the server using the Microsoft Outlook e-mail client, the Exchange Management Shell or a specialized 3rd-party tool;
  • Obtaining a backup copy of the entire Exchange Server database from a properly created full backup of the server;
  • Temporarily bringing the Exchange database(s) offline to create a copy;
  • Using specialized software such as F-Response or EnCase Enterprise to access a live Exchange server over the network and copy either individual mailboxes or an entire Exchange database file.
Each approach has its advantages and disadvantages. When exporting a mailbox, some e-mail properties get updated with the date and time of the export, which in certain cases means the loss of important information as we shall see.
One of the most complete collections from an Exchange server is to collect a copy of the mailbox database files. The main advantage in this case is that the process preserves and collects all e-mail in the store for all users with accounts on the server. If during the course of the investigation it becomes apparent that new users should be added to the investigation, then those users’ mailboxes have already been preserved and collected.
Traditionally, the collection of these files from live servers would require shutting down e-mail server services for a period of time because files that are open for access by Exchange cannot typically be copied from the server. This temporary shutdown can have a negative impact on the company and the productivity of its employees. In some cases, a process like this is scheduled to be done out of hours or over a weekend to further minimize impact on the company.
Some 3rd-party software utilities can also be used to access the live Exchange server over the network and to preserve copies of the files comprising the information store.
Another approach to collecting mailbox database files is to collect a recent full backup of Exchange, if there is one. Once these files are preserved and collected, there are a number of 3rd-party utilities on the market today that can extract mailboxes from them, such as Kernel Exchange EDB Viewer or Kernel EDB to PST.
A different approach that is becoming more and more important is to use features of Exchange itself to perform the investigation. Exchange has a number of features, such as audit logs or In-Place Hold, that help the investigation of misuse, amongst other purposes, by keeping data intact and a detailed log of actions performed in the messaging system.

Conclusion

In the first part of this article series, we looked at the importance of e-mail and forensics investigation, the scenarios we will be using, and how data is often collected and preserved from an Exchange environment. In the next article we will start looking at extracting data using Exchange features.


6 Feb 2015

OSXCollector: Forensic Collection and Automated Analysis for OS X

Introducing OSXCollector

We use Macs a lot at Yelp, which means that we see our fair share of Mac-specific security alerts. Host-based detectors will tell us about known malware infestations or weird new startup items. Network-based detectors see potential C2 callouts or DNS requests to resolve suspicious domains. Sometimes our awesome employees just let us know, “I think I have like Stuxnet or conficker or something on my laptop.”
When alerts fire, our incident response team’s first goal is to “stop the bleeding” – to contain and then eradicate the threat. Next, we move to “root cause the alert” – figuring out exactly what happened and how we’ll prevent it in the future. One of our primary tools for root causing OS X alerts is OSXCollector.
OSXCollector is an open source forensic evidence collection and analysis toolkit for OS X. It was developed in-house at Yelp to automate the digital forensics and incident response (DFIR) our crack team of responders had been doing manually.

Performing Forensics Collection

The first step in DFIR is gathering information about what’s going on – forensic artifact collection, if you like fancy terms. OSXCollector gathers information from plists, SQLite databases and the local filesystem, then packages it in an easy-to-read and easier-to-parse JSON file.
osxcollector.py is a single Python file that runs without any dependencies on a standard OS X machine. This makes it really easy to run collection on any machine – no fussing with brew, pip, config files, or environment variables. Just copy the single file onto the machine and run it. sudo osxcollector.py is all it takes.
$ sudo osxcollector.py
Wrote 35394 lines.
Output in osxcollect-2014_12_21-08_49_39.tar.gz

Details of Collection

The collector outputs a .tar.gz containing all the collected artifacts. The archive contains a JSON file with the majority of the information. Additionally, a set of useful logs from the target system is included.
The collector gathers many different types of data including:
  • install history and file hashes for kernel extensions and installed applications
  • details on startup items including LaunchAgents, LaunchDaemons, ScriptingAdditions, and other login items
  • OS quarantine, the information OS X uses to show ‘Are you sure you wanna run this?’ when a user is trying to open a file downloaded from the internet
  • file hashes and source URL for downloaded files
  • a snapshot of browser history, cookies, extensions, and cached data for Chrome, Firefox, and Safari
  • user account details
  • email attachment hashes
The docs page on GitHub contains a more in depth description of collected data.
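The underlying collection pattern — read a native artifact such as a plist and normalize it into one JSON record per item — can be sketched roughly as follows (the section name and key layout are illustrative, not OSXCollector's actual schema):

```python
import io
import json
import plistlib

def plist_to_json_lines(plist_bytes, section):
    """Normalize a plist into one JSON record per top-level entry."""
    data = plistlib.load(io.BytesIO(plist_bytes))
    lines = []
    for key, value in data.items():
        record = {"osxcollector_section": section, "key": key, "value": value}
        # default=str coerces dates and other plist types to strings.
        lines.append(json.dumps(record, default=str, sort_keys=True))
    return lines

# Example: a fabricated LaunchAgents-style plist.
sample = plistlib.dumps({"Label": "com.example.agent", "RunAtLoad": True})
for line in plist_to_json_lines(sample, "startup"):
    print(line)
```

Flattening everything to one-record-per-line JSON is what makes the later grep/jq triage shown below practical.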

Performing Basic Forensic Analysis

Forensic analysis is a bit of an art and a bit of a science. Every analyst will see a bit of a different story when reading the output from OSXCollector – that’s part of what makes analysis fun.
Generally, collection is performed on a target machine because something is hinky: anti-virus found a file it doesn’t like, deep packet inspection observed a callout, endpoint monitoring noticed a new startup item, etc. The details of this initial alert – a file path, a timestamp, a hash, a domain, an IP, etc. – are enough to get going.
OSXCollector output is very easy to sort, filter, and search for manual forensic analysis. By mixing a bit of command-line-fu with some powerful tools like grep and jq, a lot of questions can be answered. Here are just a few examples:
Get everything that happened around 11:35:

$ cat INCIDENT32.json | grep '2014-01-01 11:3[2-8]'

Just the URLs from that time period:

$ cat INCIDENT32.json | grep '2014-01-01 11:3[2-8]' | jq 'select(has("url"))|.url'

Just details on a single user:

$ cat INCIDENT32.json | jq 'select(.osxcollector_username=="ivanlei")|.'
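If jq is not at hand, the same triage takes only a few lines of Python. A sketch that filters JSON-lines output by a time window (the timestamp field names follow the examples above; the helper itself is ours):

```python
import json

def entries_between(json_lines, start, end,
                    time_keys=("visit_time", "last_visit_time")):
    """Yield parsed entries whose timestamp string falls inside [start, end]."""
    for line in json_lines:
        try:
            entry = json.loads(line)
        except ValueError:
            continue                    # skip non-JSON lines
        for key in time_keys:
            ts = entry.get(key)
            # Lexicographic comparison works for "YYYY-MM-DD HH:MM:SS" strings.
            if ts and start <= ts <= end:
                yield entry
                break

lines = [
    '{"visit_time": "2014-01-01 11:33:02", "url": "http://example.com/a"}',
    '{"visit_time": "2014-01-01 14:10:00", "url": "http://example.com/b"}',
]
hits = list(entries_between(lines, "2014-01-01 11:32", "2014-01-01 11:38"))
print([h["url"] for h in hits])         # only the 11:33 visit survives
```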

Performing Automated Analysis with OutputFilters

Output filters process and transform the output of OSXCollector. The goal of filters is to make it easy to analyze OSXCollector output. Each filter has a single purpose. They do one thing and they do it right.
For example, the FindDomainsFilter does just what it sounds like: it finds domain names within a JSON entry and adds them as a new key. Given the input:
{
  "visit_time": "2014-10-16 09:44:57",
  "title": "Pizza New York, NY",
  "url": "http://www.yelp.com/search?find_desc=pizza&find_loc=NYC"
}
the FindDomainsFilter would add an osxcollector_domains key to the output:
{
  "visit_time": "2014-10-16 09:44:57",
  "title": "Pizza New York, NY",
  "url": "http://www.yelp.com/search?find_desc=pizza&find_loc=NYC",
  "osxcollector_domains": ["yelp.com","www.yelp.com"]
}
This enhanced JSON entry can now be fed into additional OutputFilters that perform actions like matching domains against a blacklist or querying a passive DNS service for domain reputation information.
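A filter of this shape is only a few lines. Here is a hedged re-implementation of the idea — not Yelp's actual code — using urllib.parse:

```python
from urllib.parse import urlparse

def find_domains(entry):
    """Add an osxcollector_domains key listing hostnames found in 'url'."""
    url = entry.get("url")
    if not url:
        return entry                     # nothing to extract
    host = urlparse(url).hostname
    if host:
        domains = {host}
        # Also record a naive registered-domain guess (last two labels).
        parts = host.split(".")
        if len(parts) > 2:
            domains.add(".".join(parts[-2:]))
        entry["osxcollector_domains"] = sorted(domains)
    return entry

entry = {"url": "http://www.yelp.com/search?find_desc=pizza&find_loc=NYC"}
print(find_domains(entry)["osxcollector_domains"])   # ['www.yelp.com', 'yelp.com']
```

Because each filter only adds keys and never removes them, filters compose cleanly into the pipelines described below.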

Basic Filters

FindDomainsFilter

Finds domain names in OSXCollector output and adds an osxcollector_domains key to JSON entries.

FindBlacklistedFilter

Compares data against user defined blacklists and adds an osxcollector_blacklist key to matching JSON entries.
Analysts should create blacklists for domains, file hashes, file names, and any known hinky stuff.
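The matching logic might be sketched like this (a simplification; the real filter's blacklist format and matching rules may differ):

```python
def find_blacklisted(entry, blacklists):
    """Tag an entry whose values hit any user-defined blacklist."""
    hits = []
    values = {str(v).lower() for v in entry.values()}
    for name, bad_terms in blacklists.items():
        # Case-insensitive exact match between entry values and the list.
        if values & {t.lower() for t in bad_terms}:
            hits.append(name)
    if hits:
        entry["osxcollector_blacklist"] = sorted(hits)
    return entry

blacklists = {
    "domains": {"evil.example"},
    "hashes": {"d41d8cd98f00b204e9800998ecf8427e"},
}
entry = {"url_host": "evil.example", "sha1": "abc"}
print(find_blacklisted(entry, blacklists).get("osxcollector_blacklist"))  # ['domains']
```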

RelatedFilesFilter

Breaks an initial set of file paths into individual file and directory names and then greps for these terms. The RelatedFilesFilter is smart and ignores usernames and common terms like bin or Library.
This filter is great for figuring out how evil_invoice.pdf ended up on a machine. It will find browser history, quarantines, email messages, etc. related to a file.
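The term-extraction step could be sketched as follows (the stop-word list is illustrative, not the filter's real one):

```python
import os

STOP_TERMS = {"bin", "library", "users", "tmp", "usr", "local"}

def terms_from_paths(paths, usernames=()):
    """Split file paths into searchable terms, dropping noise words."""
    ignore = STOP_TERMS | {u.lower() for u in usernames}
    terms = set()
    for path in paths:
        for part in path.replace("\\", "/").split("/"):
            name, _ = os.path.splitext(part)     # drop the extension
            if name and name.lower() not in ignore:
                terms.add(name)
    return terms

print(terms_from_paths(["/Users/kate/Downloads/evil_invoice.pdf"],
                       usernames=["kate"]))
```

The surviving terms ("Downloads", "evil_invoice") are then grepped across the whole collection to pull in related history and quarantine entries.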

ChromeHistoryFilter and FirefoxHistoryFilter

These filters build a really nice browser history sorted in descending time order. The output is comparable to looking at the history tab in the browser, but contains more info, such as whether the URL was visited because of a direct user click or in a hidden iframe.

Threat API Filters

OSXCollector output typically has thousands of potential indicators of compromise like domains, URLs, and file hashes. Most are benign; some indicate a serious threat. Sorting the wheat from the chaff is quite a challenge. Threat APIs like OpenDNS, VirusTotal, and ShadowServer use a mix of confirmed intelligence and heuristics to augment and classify indicators and help find the needle in the haystack.

OpenDNS RelatedDomainsFilter

Looks up an initial set of domains and IPs with the OpenDNS Umbrella API and finds related domains. Threats often involve relatively unknown domains or IPs. However, the 2nd-generation related domains often relate back to known malicious sources.

OpenDNS & VirusTotal LookupDomainsFilter

Looks up domain reputation and threat information in VirusTotal and OpenDNS.
The filters use heuristics to determine what is suspicious. These can create false positives, but usually a download from a domain marked as suspicious is a good lead.

ShadowServer & VirusTotal LookupHashesFilter

Looks up hashes with the VirusTotal and ShadowServer APIs. VirusTotal acts as a blacklist of known malicious hashes while ShadowServer acts as a whitelist of known good file hashes.
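Combining a blacklist and a whitelist in this way reduces to a simple triage function. A sketch (the hash sets below are local examples; the real filters query the APIs):

```python
def classify_hashes(hashes, known_bad, known_good):
    """Triage file hashes against a blacklist and a whitelist."""
    verdicts = {}
    for h in hashes:
        h = h.lower()
        if h in known_bad:
            verdicts[h] = "malicious"    # VirusTotal-style blacklist role
        elif h in known_good:
            verdicts[h] = "benign"       # ShadowServer-style whitelist role
        else:
            verdicts[h] = "unknown"      # needs analyst attention
    return verdicts

known_bad = {"44d88612fea8a8f36de82e1278abb02f"}   # MD5 of the EICAR test file
known_good = {"d41d8cd98f00b204e9800998ecf8427e"}  # MD5 of the empty file
print(classify_hashes(["44D88612FEA8A8F36DE82E1278ABB02F", "cafebabe"],
                      known_bad, known_good))
```

Anything left "unknown" after both lookups is exactly the haystack the analyst still has to search by hand.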

AnalyzeFilter – The One Filter to Rule Them All

AnalyzeFilter is Yelp’s one filter to rule them all. It chains all the previous filters into one monster analysis. The results, enhanced with blacklist info, threat APIs, related files and domains, and even pretty browser history, are written to a new output file.
Then the Very Readable Output Bot takes over and prints an easy-to-digest, human-readable, nearly-English summary of what it found. It is basically equivalent to running:
$ cat SlickApocalypse.json | \
python -m osxcollector.output_filters.find_domains | \
python -m osxcollector.output_filters.shadowserver.lookup_hashes | \
python -m osxcollector.output_filters.virustotal.lookup_hashes | \
python -m osxcollector.output_filters.find_blacklisted | \
python -m osxcollector.output_filters.related_files | \
python -m osxcollector.output_filters.opendns.related_domains | \
python -m osxcollector.output_filters.opendns.lookup_domains | \
python -m osxcollector.output_filters.virustotal.lookup_domains | \
python -m osxcollector.output_filters.chrome_history | \
python -m osxcollector.output_filters.firefox_history | \
tee analyze_SlickApocalypse.json | \
jq 'select(false == has("osxcollector_shadowserver")) |
select(has("osxcollector_vthash") or
has("osxcollector_vtdomain") or
has("osxcollector_opendns") or
has("osxcollector_blacklist") or
has("osxcollector_related"))'
and then letting a wise-cracking analyst explain the results to you. The Very Readable Output Bot even suggests new values to add to your blacklists.
This thing is the real deal and our analysts don’t even look at OSXCollector output until after they’ve run the AnalyzeFilter.

Give It a Try

The code for OSXCollector is available on GitHub – https://github.com/Yelp/osxcollector. If you’d like to talk more about OS X disk forensics feel free to reach out to me on Twitter at @c0wl.


Source: http://engineeringblog.yelp.com/