Forensics Blog

Forensic Cases

Analysis and commentary on real-world cases.

Malware

Reversing, QuickScan, dynamic and static analysis.

Digital Cybercrime

Criminal activities carried out with the help of computer tools.

Mobile Device Forensics

Covers the identification, preservation, acquisition, documentation, and analysis of information from mobile devices.

IT Forensics, Hacking, Digital Crime

Focus on articles and documentation related to cybercrime.

Recent Posts

25 Mar 2015

OWASP Mobile Security

OWASP Mobile Security is a project intended to provide the resources needed to make mobile applications more secure. Through this project, the goal is to classify mobile security risks and provide development controls that reduce their impact and likelihood of exploitation.

The primary focus is the application layer, with emphasis on mobile applications deployed to end-user devices and the server-side infrastructure those applications communicate with, as well as on the integration between the application, remote authentication services, and cloud platform-specific features.

OWASP Top 10 Mobile Security Risks (2014)

  1. Weak server-side controls: threat agents include any entity that acts as an untrusted source of input to a backend service, web service, or traditional web application. Examples of such entities: malware, a user, or a vulnerable application on the mobile device.
  2. Insecure data storage: threat agents include an adversary who has obtained a lost or stolen mobile device, as well as malware or any other application acting on the adversary's behalf that runs on the victim's device.
  3. Insufficient transport layer protection: when designing a mobile application, data is commonly exchanged between client and server. When the solution transmits its data, it must traverse a carrier network between the mobile device and the Internet. Threat agents could exploit vulnerabilities to intercept sensitive data while it travels across the network: an adversary shares the compromised or monitored local network (for example, Wi-Fi), controls network devices (routers, cell towers, proxies, etc.), or malware is present on the mobile device.
  4. Unintended data leakage: threat agents include mobile malware, modified versions of legitimate applications, or an adversary with physical access to the victim's mobile device.
  5. Poor authorization and authentication: attacks that exploit authentication vulnerabilities are typically carried out with readily available or custom-built automated tools.
  6. Broken cryptography: threat agents include anyone with physical access to data that has been encrypted improperly, or mobile malware acting on the adversary's behalf.
  7. Client-side injection: anyone able to send untrusted data (e.g., scripts) to the mobile application can use it to take control of the device.
  8. Security decisions via untrusted inputs: includes any entity that can send untrusted input to sensitive methods of the mobile application so that it performs unintended actions.
  9. Improper session handling: anyone, or any mobile application, with access to HTTP/S traffic, cookie data, etc. can manipulate an established user session or establish a new one on the user's behalf.
  10. Lack of binary protections: typically, an adversary will analyze and reverse engineer a mobile application's code and then modify it to perform hidden functions.
Using a vulnerability-scanning solution can automate some of the security testing. However, a common problem with these assessments is that they produce too many false positives, so developers and security analysts can end up overlooking the real risks.

It is imperative to perform a complete vulnerability assessment based on the Top 10, backed by in-depth research; this will help train developers in security best practices and also improve development time and productivity.
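
To make one of these categories concrete, the short Python sketch below illustrates risk #7 (client-side injection) against the kind of local SQLite datastore many mobile apps ship with. Python and the sample table are assumptions used purely for illustration (they are not part of the OWASP project); the same pattern applies to the platform SQLite APIs on Android or iOS.

import sqlite3

# Hypothetical local datastore, as a mobile app might keep on the device.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (owner TEXT, body TEXT)")
conn.execute("INSERT INTO notes VALUES ('alice', 'private note')")
conn.execute("INSERT INTO notes VALUES ('bob', 'hello')")

user_input = "x' OR '1'='1"  # attacker-controlled string

# Vulnerable: untrusted input concatenated into the query returns every row.
rows = conn.execute(
    "SELECT body FROM notes WHERE owner = '" + user_input + "'").fetchall()
print("concatenated query:", rows)

# Safer: a parameterized query treats the input as data, not as SQL.
rows = conn.execute(
    "SELECT body FROM notes WHERE owner = ?", (user_input,)).fetchall()
print("parameterized query:", rows)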


Source: Segurinfo

4 Mar 2015

Various Resources

Data Recovery Tools:
http://www.forensicswiki.org/wiki/Tools:Data_Recovery

Data Recovery Blog
https://www.inforecovery.com/blog/what-is-forensic-data-recovery

Android Forensics, Part 2: How We Recovered Erased Data
https://blog.avast.com/2014/07/09/android-foreniscs-pt-2-how-we-recovered-erased-data/

21 Popular Forensics Tools
http://resources.infosecinstitute.com/computer-forensics-tools/

Deft
http://www.deftlinux.net/

Top 20 Free Digital Forensic Investigation Tools for SysAdmins
http://www.gfi.com/blog/top-20-free-digital-forensic-investigation-tools-for-sysadmins/

Six Mobile Forensic Tools
http://www.concise-courses.com/security/mobile-forensics-tools/

FREE Computer Forensic Tools
https://forensiccontrol.com/resources/free-software/

3 Mar 2015

How do you "do" analysis? (Article)


Everybody remembers "The Matrix", right?  So, you're probably wondering what the image to the right has to do with this article, particularly given the title.  Well, that's easy...this post is about employing various data sources and analysis techniques, and pivoting in order to add context and achieve a greater level of detail in your analysis.  Sticking with just one analysis technique or process, much like simply trying to walk straight through the building lobby to rescue Morpheus, would not have worked.  In order to succeed, Neo and Trinity had to pivot and mutually support each other in order to achieve their collective goal.  So...quite the metaphor for a blog post that involves pivoting, eh?

Timeline Analysis
Timeline analysis is a great technique for answering a wide range of questions.  For malware infections and compromises, timeline analysis can provide the necessary context to illustrate things like the initial infection (or compromise) vector, the window of compromise (i.e., based on when the system was really infected or compromised, if anti-forensics techniques were used), what actions may have been taken following the infection/compromise, the hours during which the intruder tends to operate, and other systems an intruder may have reached out to (in the case of a compromise).

Let's say that I have an image of a system thought to be infected with malware.  All I know at this point is that a NIDS alert identified the system as being infected with a particular malware variant based on C2 communications that were detected on the wire, so I can assume that the system must have been infected on or before the date and time that the alert was generated.  Let's also say that based on the NIDS alert, we know that the malware (at least, some variants of it) persists via a Windows service.  Given this little bit of information, here's an analysis process that I might follow, including pivot points:
  1. Load the timeline into Notepad++, scroll all the way to the bottom, and do a search (going up from the bottom) to look for "Service Control Manager/7045" records.
  2. Locate the file referenced by the event record by searching for it in the timeline.  PIVOT to the MFT: parse the MFT, extract the parsed record contents for the file in question in order to determine if there was any time stomping involved.
  3. PIVOT within the timeline; start by looking "near" when the malware file was first created on the system to determine what other activity occurred prior to that event (i.e., what user was logged in, were there indications of web browsing activity, was the user checking their email, etc.)
  4. PIVOT to the file itself: parse the PE headers to get things like compile time, section names, section sizes, strings embedded in the file, etc.  These can all provide greater insight into the file itself.  Extract the malware file and any supporting files (DLLs, etc.) for analysis.
  5. If the malware makes use of DLL side loading, note the persistent application name, in relation to applications used on the system, as well as within the rest of the infrastructure.  
  6. If your timeline doesn't include AV log entries, and there are AV logs on the system, PIVOT to those in order to potentially get some additional detail or context.  Were there any previous attempts to install malware with the same or a similar name or location?  McAfee AV will flag on behaviors...was the malware installed from a Temp directory, or some other location?  
  7. If the system has a hibernation file that was created or modified after the system became infected, PIVOT to that file to conduct analysis regarding the malicious process.
  8. If the malware is known to utilize the WinInet API for off-system/C2 communications, see if the Local Service or Network Service profiles have a populated IE web history (location depends upon the version of Windows being examined). 
  9. If the system you're analyzing has Prefetch files available, were there any specific to the malware?  If so, PIVOT to those, parsing the modules and looking for anything unusual.  
Again, this is simply a notional analysis, meant to illustrate some steps that you could take during analysis.  Of course, it will all depend on the data that you have available, and the goals of your analysis.
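
As a rough illustration of steps 1 through 3, here is a minimal Python sketch (mine, not a tool from the post) that searches a text timeline for "Service Control Manager/7045" service-install records and prints the surrounding lines, so you can see what else happened "near" each install. The timeline file name and the one-event-per-line format are assumptions.

import sys

MARKER = "Service Control Manager/7045"   # service-install event records
CONTEXT = 10                               # surrounding timeline lines to show

def pivot(timeline_path, marker=MARKER, context=CONTEXT):
    """Find marker lines in a text timeline and show the nearby events."""
    with open(timeline_path, errors="replace") as f:
        lines = f.readlines()
    for i, line in enumerate(lines):
        if marker in line:
            print("== hit at line %d ==" % (i + 1))
            # "Pivot" by looking at what happened just before and after the event.
            for ctx in lines[max(0, i - context): i + context + 1]:
                print(ctx.rstrip())
            print()

if __name__ == "__main__":
    pivot(sys.argv[1] if len(sys.argv) > 1 else "timeline.txt")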

Web Shell Analysis
Web shells are a lot of fun.  Most of us are familiar with web shells, at least to some extent, and recognize that there are a lot of different ways that a web shell can be crafted, based on the web server that's running (Apache, IIS, etc.), other applications and content management systems that are installed, etc.  Rather than going into detail regarding different types of web shells, I'll focus just on what an analyst might be looking for (or find) on a Windows server running the IIS web server.  CrowdStrike has a very good blog post that illustrates some web shell artifacts that you might find if an .aspx web shell is created on such a system.

In this example, let's say that you have received an image of a Windows system, running the IIS web server.  You've created a timeline and found artifacts similar to what's described in the CrowdStrike blog post, and now you're ready to start pivoting in your analysis.

  1. You find indications of a web shell via timeline analysis; you now have a file name.
  2. PIVOT to the web server logs (if they're available), searching for requests for that page.  As a result of your search, you will now have (a) IP address(es) from where the requests originated, and (b) request contents illustrating the commands that the intruder ran via the web shell.
  3. Using the IP address(es) you found in step 2, PIVOT within the web server logs, this time using the class C or class B range for the IP address(es), to cast the net a bit wider.  This can give you additional information regarding the intruder's early attempts to fingerprint and compromise the web server, as you may find indications of web server vulnerability scans originating from the IP address range.  You may also find indications of additional activity originating from the IP address range(s).
  4. PIVOT back into your timeline, using the date/time stamps of the requests that you're seeing in the web server logs as pivot points, in order to see what events occurred on the systems as a result of requests that were sent via the web shell.  Of course, where the artifacts can be found may depend a great deal upon the type of web shell and the contents of the request.
  5. If tools were uploaded to the system and run, PIVOT to any available Prefetch files, and parse out the embedded strings that point to modules loaded by the application, in order to see if there are any additional files that you should be looking at.
Once again, this is simply a notional example of how you might create and use pivot points in your analysis.  This sort of process works not just for web shells, but it's also very similar to the process I used on the IBM ISS ERS team when Chris and I were analyzing SQL injection attacks via IIS web servers; conceptually, there is a lot of overlap between the two types of attacks.
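
As a rough sketch of step 2 above (again, mine rather than code from the post), the following Python snippet pulls requests for a suspect page out of IIS W3C-format web server logs and tallies the requesting client IPs; the web shell name and log file name are hypothetical.

import collections

SUSPECT_PAGE = "/images/help.aspx"   # hypothetical web shell name from the timeline

def grep_iis_log(log_path, suspect=SUSPECT_PAGE):
    """Collect requests for a suspect page from an IIS W3C log and tally client IPs."""
    fields, hits, ips = [], [], collections.Counter()
    with open(log_path, errors="replace") as f:
        for line in f:
            if line.startswith("#Fields:"):
                fields = line.split()[1:]          # column names follow the directive
                continue
            if line.startswith("#") or not line.strip():
                continue
            row = dict(zip(fields, line.split()))
            if row.get("cs-uri-stem", "").lower().endswith(suspect.lower()):
                hits.append(row)
                ips[row.get("c-ip", "?")] += 1
    return hits, ips

if __name__ == "__main__":
    hits, ips = grep_iis_log("u_ex150301.log")
    for row in hits:
        print(row.get("date"), row.get("time"), row.get("c-ip"),
              row.get("cs-method"), row.get("cs-uri-query"))
    print("requesting IPs:", ips.most_common())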

Additional Resources (Web Shells)
Security Disclosures blog post
Shell-Detector
Yara rules - 1aN0rmus, Loki

Memory Analysis
This blog post from Contextis provides a very good example of pivoting during analysis; in this case, the primary data source for analysis was system memory in the form of a hibernation file.  The case started with disk forensics, and a hit for a particular item was found in a crash dump file, and then the analyst pivoted to the hibernation file.

Adam did a great job with the analysis, and in writing up the post.  Given that this post started with disk forensics, some additional pivot points for the analysis are available:

  1. Pivoting within the memory dump, the analyst could have identified any mutex utilized by the malware.  
  2. Pivoting into a timeline, the analyst may have been able to identify when the service itself was first installed (i.e., "Service Control Manager" record with event ID 7045).
  3. Determining when the malicious service was installed can lead the analyst to the initial infection vector (IIV), and will be extremely valuable if the bad guys used anti-forensic techniques such as time stomping the malware files to try to obfuscate the creation date.
  4. Pivot to the MFT and extract records for the malicious DLL files, as well as the keystroke log file.  Many of us have seen malware that includes a keylogger component that will continually time stomp the keystroke log file as new key strokes are added to it.  

"Doing" Analysis
I received an interesting question a while back, asking for tips on how I "do analysis".  I got to thinking about it, and it made sense to add my thoughts to this blog post.

Most times, when I receive an image, I have some sort of artifact or indicator to work with...a file name or path, a date/time, perhaps a notice from AV that something was detected.  That is the reason why I'm looking at the image in the first place.  And as a result, producing a timeline is driven by the questions I need to answer; that is to say, I do not create a timeline simply because I received an image.  Instead, I create a timeline because that's often the best way to address the goals of my exam.

When I do create a timeline, I most often have something to look for, to use as an initial starting or pivot point for my analysis.  Let's say that I have a file that I'm interested in; the client received a notification or alert, and that led them to determine that the system was infected.  As such, they want to know what the malware is, how it got on the system, and what may have occurred after the malware infected the system.  After creating the timeline, I can start by searching the timeline for the file listing.  I will usually look for other events "around" the times where I find the file listed...Windows Event Log records, Registry keys being created/modified, etc.

Knowing that most tools (TSK fls.exe, FTK Imager "Export Directory Listing..." functionality) used to populate a timeline will only retrieve the $STANDARD_INFORMATION attributes for the file, I will often extract and parse the $MFT, and then check to see if there are indications of the file being time stomped.  If it does appear that the file was time stomped, I will go into the timeline and look "near" the $FILE_NAME attribute time stamps for further indications of activity.
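
Here is a minimal sketch of that check, assuming the $MFT has already been parsed into a CSV holding the $STANDARD_INFORMATION and $FILE_NAME creation times for each record; the column names are hypothetical and not tied to any particular parser.

import csv
from datetime import datetime, timedelta

def looks_time_stomped(si_created, fn_created, slack_seconds=2):
    """Flag records whose $STANDARD_INFORMATION creation time predates $FILE_NAME's.

    Time-stomping tools commonly rewrite the $SI time stamps (often truncating
    them to whole seconds) while leaving $FILE_NAME alone, so an $SI time
    noticeably earlier than $FN is worth a closer look.
    """
    return si_created + timedelta(seconds=slack_seconds) < fn_created

def scan(parsed_mft_csv):
    # Hypothetical columns: path, si_created, fn_created (ISO 8601 strings).
    with open(parsed_mft_csv, newline="") as f:
        for row in csv.DictReader(f):
            si = datetime.fromisoformat(row["si_created"])
            fn = datetime.fromisoformat(row["fn_created"])
            if looks_time_stomped(si, fn):
                print("possible time stomping:", row["path"], si, fn)

if __name__ == "__main__":
    scan("mft_parsed.csv")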

One of the things I use to help me with my analysis is that I will apply things I learned from previous engagements to my current analysis.  One of the ways I do this is to use the wevtx.bat tool to parse the Windows Event Logs that I've extracted from the image.  This batch file will first run MS's LogParser tool against the *.evtx files I'm interested in, and then parse the output into the appropriate timeline format, while incorporating header tags from the eventmap.txt event mapping file.  If you open the eventmap.txt file in Notepad (or any other editor) you'll see that it includes not only the mappings, but also URLs that are references for the tags.  So, if I have a timeline from a case where malware is suspected, I'll search for the "[MalDetect]" tag.  I do this even though most of the malware I see on a regular basis isn't detected by AV, because often times, AV will have detected previous malware infection attempts, or it will detect malicious software downloaded after the initial infection (credential dumping tools, etc.).
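
The general idea of that event-mapping step can be sketched in a few lines of Python; the 'source/ID,tag' layout below is an assumption for illustration only, not the actual eventmap.txt format, and wevtx.bat itself is not reproduced here.

def load_event_map(path="eventmap.txt"):
    """Load source/ID-to-tag mappings; comment lines (notes, reference URLs) are skipped."""
    tags = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, tag = line.split(",", 1)
            tags[key.strip().lower()] = tag.strip()
    return tags

def tag_event(source, event_id, tags):
    """Return the tag for a source/ID pair, or an empty string if it is unmapped."""
    return tags.get("%s/%s" % (source.lower(), event_id), "")

if __name__ == "__main__":
    # Inline example map; the tag text is made up for illustration.
    tags = {"service control manager/7045": "[ServiceInstall]"}
    print(tag_event("Service Control Manager", 7045, tags))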

Note: This approach of extracting Windows Event Logs from an acquired image is necessitated by two factors.  First, I most often do not want all of the records from all of the logs.  On my Windows 7 Ultimate system, there are 141 *.evtx files.  Now, not all of them are populated, but most of them do not contain records that would do much more than fill up my timeline.  To avoid that, there is a list of fewer than a dozen *.evtx files that I will extract from an image and incorporate into a timeline.

Second, I often work without the benefit of a full image.  When assisting other analysts or clients, it's often too cumbersome to have a copy of the image produced and shipped, when it will take just a few minutes for them to send me an archive containing the *.evtx files of interest, and for me to return my findings.  This is not a "speed over accuracy" issue; instead, it's a Sniper Forensics approach that lets me get to the answers I need much quicker.

Another thing I do during timeline analysis is that I keep the image (if available) open in FTK Imager for easy pivoting, so that I can refer to file contents quickly.  Sometimes it's not so much that a file was modified, as much as it is what content was added to the file.  Other times, contents of batch files can lead to additional pivot points that need to be explored.

Several folks have asked me about doing timeline analysis when asked to "find bad stuff".  Like many of you reading this blog post, I do get those types of requests.  I have to remember that sometimes, "bad stuff" leaves a wake.  For example, there is malware that will create Registry keys (or values) that are not associated with persistence; while they do not lead directly to the malware itself (the persistence mechanism will usually point directly to the malware files), they do help in other ways.  One way is that the presence of the key (or value, as the case may be) lets us know that the malware is (or was) installed on the system.  This can be helpful with timeline analysis in general, but also during instances when the bad guy uses the malware to gain access to the system, dump credentials, and then comes back and removes the malware files and persistence mechanism (yeah, I've seen that happen more than a few times).

Another is that the LastWrite time of the key will tell us when the malware was installed.  Files can be time stomped, copied and moved around the file system, etc., all of which will have an effect on the time stamps recorded in the $MFT.  Depending on the $MFT record metadata alone can be misleading, but having additional artifacts (spurious Registry keys created/modified, Windows services installed and started, etc.) can do a great deal to increase our level of confidence in the file system metadata.

So, I like to collect all of those little telltale IOCs, so that when I do get a case of "find the bad stuff", I can check for those indicators quickly.  Do you know where I get the vast majority of the IOCs I use for my current analysis?  From all of my prior analysis.  Like I said earlier in this post, I take what I've learned from previous analysis and apply it to my current analysis, as appropriate.

Sometimes I get indicators from others.  For example, Jamie/@gleeda from Volatility shared with me (it's also in the book) that when the gsecdump credential theft tool is run to extract LSA secrets, the HKLM/Security/Policy/Secrets key LastWrite time is updated.  So I wrote a RegRipper plugin to extract the information and include it in a timeline (without including all of the LastWrite times from all of the keys in the Security hive, which just adds unnecessary volume to my timeline), and since then, I've used it often enough that I'm comfortable with the fidelity of the data.  This indicator serves as a great pivot point in a timeline.
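
A minimal sketch of checking that indicator with the python-registry module (my choice for illustration; RegRipper itself and its plugins are Perl-based) against a SECURITY hive exported from an image:

from Registry import Registry   # python-registry module

def secrets_lastwrite(security_hive_path):
    """Return the LastWrite time of the Policy\\Secrets key from a SECURITY hive."""
    reg = Registry.Registry(security_hive_path)
    key = reg.open("Policy\\Secrets")
    return key.timestamp()      # key LastWrite time as a datetime

if __name__ == "__main__":
    print("Policy\\Secrets LastWrite:", secrets_lastwrite("SECURITY"))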

A couple of things I generally don't do during analysis:
I don't include EVERYTHING into the timeline.  Sometimes, I don't have everything...I don't have access to the entire image.  Someone may send me a few files ($MFT, Registry hives, Windows Event Logs, etc.) because it's faster to do that than ship the image.  However, when I do have an image, I very often don't want everything, as getting everything can lead to a great deal of information being put into the timeline that simply adds noise.  For example, if I'm interested in remote access to a system, I generally do not include Windows Event Logs that focus on hardware monitoring events in my timeline.

I have a script that will parse the $MFT and display the $STANDARD_INFORMATION and $FILE_NAME metadata in a timeline...but I don't use it very often.  In fact, I can honestly say that after creating it, I haven't once used it during my own analysis.  If I'm concerned with time stomping, it's most often only for a handful of files, and I don't see that as a reason for doubling the size of my timeline and making it harder to analyze.  Instead, I will run a script that will display various metadata from each record, and then search the output for just the files that I'm interested in.

I don't color code my timeline.  I have been specifically asked about this...for me, with the analysis process I use, color coding doesn't add any value.  That doesn't mean that if it works for you, you shouldn't do it...not at all.  All I'm saying is that it doesn't add any significant value for me, nor does it facilitate my analysis.  What I do instead is start off with my text-based timeline (see ch. 7 of Windows Forensic Analysis) and I'll create an additional file for that system called "notes"; I'll copy-and-paste relevant extracts from the full timeline into the notes file, annotating various things along the way, such as adding links to relevant web sites, making notes of specific findings, etc.  All of this makes it much easier for me to write my final report, share findings with other team members, and consolidate my findings.

Source: WindowsIR

28 Feb 2015

Lorenzo Martínez - COOKING AN APT IN THE PARANOID WAY - EKOPARTY 2014

Lorenzo Martinez - CSI Workshop Ekoparty 2014


26 Feb 2015

Lorenzo Martínez - CSI MADRID - Workshop Ekoparty 2014

25 Feb 2015

[Video] Basic Guide to Advanced Incident Response

23 Feb 2015

Video - Live Forensic Acquisition Techniques

There are many commercial tools for pulling this information from a live system, but they are expensive and do not always fully achieve their goal. This video reviews some useful methods and items used to quickly acquire digital evidence, and shares some open-source automation scripts to help with the acquisition process.


9 Feb 2015

Various tools for memory extraction and analysis on Linux

As you all know, obtaining a dump of volatile memory and analyzing it afterwards is tremendously useful in a forensic investigation, above all because many malware artifacts use techniques that leave no data on disk.

Perhaps most memory acquisition and analysis tools have been aimed at Windows systems, since for years it has been the main target of "malicious code". However, with the rise of Android and other Linux/Unix systems, this trend is changing, and it is becoming necessary to know how to use, and keep at hand, some of the following tools:

1. Volatility Framework: perhaps one of the best-known collections of tools for extracting and analyzing volatile memory (RAM). However, Linux support is still experimental: see the LinuxMemoryForensics page on the Volatility wiki. (GNU GPL license) A usage sketch follows this list.

2. Idetect (Linux): an old implementation for Linux memory analysis.

3. LiME (Linux Memory Extractor): presented at ShmooCon 2012, it is a loadable kernel module (LKM) that enables memory acquisition, even on Android.

4. Draugr: an interesting tool that can search for kernel symbols (patterns in an XML file or via EXPORT_SYMBOL) and processes (information and sections), either through the kernel linked lists or by brute force, and can disassemble/dump memory.

5. Volatilitux: a Python framework, essentially the Linux equivalent of Volatility. It supports ARM, x86, and x86 with PAE enabled.

6. Memfetch: a simple utility to dump the memory of running processes, or to do so when a fault condition (SIGSEGV) is detected.

7. Red Hat Crash: a standalone tool for investigating both running systems and kernel memory dumps produced with the Red Hat netdump, diskdump, or kdump packages. It can also be used for memory forensics.

8. Memgrep: a simple utility to search/replace/dump memory from running processes and core files.

9. Memdump: can be used to dump system memory to an output stream, skipping holes in the memory maps. By default it dumps the contents of physical memory (/dev/mem). It is distributed under the IBM Public License.

10. Foriana: useful for extracting process and module lists from a RAM image with the help of the logical relationships between operating system structures.

11. Forensic Analysis Toolkit (FATKit): a modular, cross-platform framework designed to facilitate the extraction, analysis, aggregation, and visualization of forensic data at various levels of abstraction and data complexity.

12. The Linux Memory Forensic Acquisition (Second Look): a commercial solution with a modified crash driver and dumping scripts.

13. http://valgrind.org/

14. http://www.porcupine.org/forensics/tct.html
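
As mentioned in item 1, here is a short usage sketch (not from the original post) that drives Volatility 2.x from Python to triage a Linux RAM image acquired with LiME; the image name and the profile name are assumptions, since Linux profiles have to be built for the exact kernel of the target machine.

import subprocess

IMAGE = "ram.lime"
PROFILE = "LinuxUbuntu1404x64"          # hypothetical profile built from the target kernel
PLUGINS = ["linux_pslist", "linux_lsmod", "linux_netstat"]

for plugin in PLUGINS:
    result = subprocess.run(
        ["python", "vol.py", "-f", IMAGE, "--profile=" + PROFILE, plugin],
        capture_output=True, text=True)
    print("===", plugin, "===")
    print(result.stdout)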

Source: Hackplayers

7 Feb 2015

E-mail Forensics in a Corporate Exchange Environment

While most e-mail investigations make use of 3rd-party tools to analyze Outlook data, this article series will explore a few basic methods that a forensics investigator can use to gather and analyze data related to an e-mail investigation in Exchange 2010, 2013 and/or Online environments, using information provided by Exchange features or using MFCMapi.

Introduction

E-mail is the most utilized form of communication for businesses and individuals nowadays, and a critical system for any organization. From meeting requests to the distribution of documents and general conversation, it is very hard, if not impossible, to find an organization of any size that does not rely on e-mail. A report from the market research firm Radicati Group states that in 2011 there were 3.1 billion active e-mail accounts in the world (an increase of 5% over 2010). The report also noted that corporate employees sent and received 105 e-mails a day on average. Royal Pingdom, which monitors Internet usage, stated that in 2010, 107 trillion e-mails were sent. That is 294 billion e-mails sent per day! With a quarter of the average worker’s day spent reading and replying to e-mails, it is easy to see the importance of e-mail in today’s world.
Unfortunately, e-mail communication is often exposed to illegitimate uses due to mainly two inherent limitations:
  1. There is rarely any encryption at the sender end and/or integrity checking at the recipient end;
  2. The widely used e-mail protocol, Simple Mail Transfer Protocol [SMTP], lacks a source authentication mechanism. Worse, the metadata in the header of an e-mail, which contains information about the sender and the path the message travelled, can easily be forged.
Some common examples of these illegitimate uses are spam, phishing, cyber bullying, racial abuse, disclosure of confidential information, child pornography and sexual harassment. In the vast majority of these e-mail cybercrimes the tactics used vary from simple anonymity to impersonation and identity theft.
Although there have been many attempts at securing e-mail systems, most are still inadequately secured. Installing antiviruses, filters, firewalls and scanners is simply not enough to secure e-mail communications. Most companies have a good e-mail policy in place, but it is not enough to prevent users from breaching it and, as such, monitoring is put in place in case the need for investigation arises. However, in some cases all of this does not provide the information needed... This is why forensic analysis plays a major role, examining suspected e-mail accounts in an attempt to gather evidence to prosecute criminals in a court of law. To achieve this, a forensic investigator needs efficient tools and techniques to perform the analysis with a high degree of accuracy and in a timely fashion.
Businesses often depend on forensics analysis to prove their innocence in a lawsuit or to establish if a particular user disclosed private information for example. When someone or even the whole company is being investigated, it is imperative that all information is thoroughly analyzed as offenders will always use dubious methods in order to not get caught.
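
Since the header metadata mentioned above is central to most e-mail investigations, here is a minimal sketch (not part of the original article) that inspects it with Python's standard email module; the .eml file name is hypothetical, and none of these fields should be trusted at face value.

from email import policy
from email.parser import BytesParser

def walk_headers(eml_path):
    with open(eml_path, "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)
    print("From:       ", msg["From"])
    print("Return-Path:", msg["Return-Path"])
    print("Message-ID: ", msg["Message-ID"])
    # Received headers are prepended by each hop, so reversing them gives the
    # claimed path from sender to recipient; remember that any of this can be forged.
    for i, hop in enumerate(reversed(msg.get_all("Received", [])), 1):
        print("hop %d: %s" % (i, " ".join(hop.split())))

if __name__ == "__main__":
    walk_headers("suspect.eml")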

Scenario Information

To help explore situations where users misuse an e-mail system and a forensics investigator performs analysis on the system to determine exactly what happened, three fictional scenarios were created and are used throughout this article:

Scenario 1 - Drinks: e-mail subject "Drinks"; Offender innocent? Yes. Notes: Victim changed the e-mail body in order to frame Offender.
Scenario 2 - Lunch: e-mail subject "Lunch?"; Offender innocent? No. Notes: Offender sends an inappropriate e-mail to Victim.
Scenario 3 - Dinner: e-mail subject "Dinner Tonight"; Offender innocent? Yes. Notes: E-mail with inappropriate content sent to Victim by Hacker using SendAs permissions to impersonate Offender.
Table 1
Involved in these scenarios are three fictional characters whose names also categorize their role:
  • Offender – a user who sent an inappropriate e-mail to a work colleague (Victim);
  • Victim – in scenarios 2 and 3, this user received inappropriate e-mails. In scenario 1 she is actually the criminal pretending to be a victim;
  • Hacker – a user who managed to gain access to Offender’s mailbox and sent an inappropriate e-mail to Victim (could simply be a co-worker).

Identification and Extraction of Data

The first steps in any e-mail investigation are to identify all the potential sources of information and how e-mail servers and clients are used in the organization. These servers are no longer used just to send and receive simple messages. They have expanded into full databases, document repositories, and contact and calendar managers with many other uses. Organizations use these powerful messaging servers to manage workflow, communicate with employees and customers, and share data. A skilled e-mail forensic investigator will identify how the messaging system is being used far beyond e-mail, as an investigation often involves other items such as calendar appointments.
Forensic analysis of a messaging system often produces significant information about users and the organization itself. Nowadays this is much more than simply looking at e-mail messages.

Exchange Analysis

Every Exchange forensic analysis should start on the Exchange system itself. If the required information is not available on Exchange, then a deeper analysis at the client side is typically performed.
Laptops, desktops and servers once played a supporting role in the corporate environment: shutting them down for traditional forensic imaging tended to have only a minor impact on the company. However, in today’s business environment, shutting down servers can have tremendously negative impacts on the company. In many instances, the company’s servers are not just supporting the business – they are the business. The availability of software tools and methodologies capable of preserving data from live, running servers means that it is no longer absolutely necessary to shut down a production e-mail server in order to preserve data from it. A good set of tools and a sound methodology allow investigators to strike a balance between the requirements for a forensically sound preservation process and the business imperative of minimizing impact on normal operations during the preservation process.
To preserve e-mail from a live Microsoft Exchange server, forensic investigators typically take one of several different approaches, depending on the characteristics of the misuse being investigated. Those approaches might include:
  • Exporting a copy of a mailbox from the server using the Microsoft Outlook e-mail client, the Exchange Management Shell or a specialized 3rd-party tool;
  • Obtaining a backup copy of the entire Exchange Server database from a properly created full backup of the server;
  • Temporarily bringing the Exchange database(s) offline to create a copy;
  • Using specialised software such as F-Response or EnCase Enterprise to access a live Exchange server over the network and copying either individual mailboxes or an entire Exchange database file.
Each approach has its advantages and disadvantages. When exporting a mailbox, some e-mail properties get updated with the date and time of the export, which in certain cases means the loss of important information as we shall see.
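
For the first approach, the export can be scripted; the sketch below drives the Exchange Management Shell from Python and is an illustration of my own rather than a procedure from the article. New-MailboxExportRequest (Exchange 2010 SP1 and later) additionally requires the Mailbox Import Export role and a UNC path reachable by Exchange; the mailbox and share names are hypothetical.

import subprocess

mailbox = "Offender"
pst_path = r"\\fileserver\forensics$\offender.pst"   # hypothetical UNC path

ps_command = (
    "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.E2010; "
    "New-MailboxExportRequest -Mailbox '%s' -FilePath '%s'" % (mailbox, pst_path)
)

# Run on the Exchange server itself (or adapt to a remote PowerShell session).
subprocess.run(["powershell.exe", "-NoProfile", "-Command", ps_command], check=True)
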
One of the most complete collection approaches for an Exchange server is to copy the mailbox database files. The main advantage in this case is that the process preserves and collects all e-mail in the store for all users with accounts on the server. If during the course of the investigation it becomes apparent that new users should be added to the investigation, then those users’ mailboxes have already been preserved and collected.
Traditionally, the collection of these files from live servers would require shutting down e-mail server services for a period of time because files that are open for access by Exchange cannot typically be copied from the server. This temporary shutdown can have a negative impact on the company and the productivity of its employees. In some cases, a process like this is scheduled to be done out of hours or over a weekend to further minimize impact on the company.
Some 3rd-party software utilities can also be used to access the live Exchange server over the network and to preserve copies of the files comprising the information store.
Another approach to collecting mailbox database files is to collect a recent full backup of Exchange, if there is one. Once these files are preserved and collected, there are a number of 3rd-party utilities on the market today that can extract mailboxes from them, such as Kernel Exchange EDB Viewer or Kernel EDB to PST.
A different approach that is becoming more and more important is to use features of Exchange itself to perform the investigation. Exchange has a number of features, such as audit logs or In-Place Hold, that help, amongst other purposes, the investigation of misuse by keeping data intact and a detailed log of actions performed in the messaging system.

Conclusion

In the first part of this article series, we looked at the importance of e-mail and forensics investigation, the scenarios we will be using, and how data is often collected and preserved from an Exchange environment. In the next article we will start looking at extracting data using Exchange features.


6 Feb 2015

OSXCollector: Forensic Collection and Automated Analysis for OS X

Introducing OSXCollector

We use Macs a lot at Yelp, which means that we see our fair share of Mac-specific security alerts. Host-based detectors will tell us about known malware infestations or weird new startup items. Network-based detectors see potential C2 callouts or DNS requests to resolve suspicious domains. Sometimes our awesome employees just let us know, “I think I have like Stuxnet or conficker or something on my laptop.”
When alerts fire, our incident response team’s first goal is to “stop the bleeding” – to contain and then eradicate the threat. Next, we move to “root cause the alert” – figuring out exactly what happened and how we’ll prevent it in the future. One of our primary tools for root causing OS X alerts is OSXCollector.
OSXCollector is an open source forensic evidence collection and analysis toolkit for OS X. It was developed in-house at Yelp to automate the digital forensics and incident response (DFIR) our crack team of responders had been doing manually.

Performing Forensics Collection

The first step in DFIR is gathering information about what’s going on – forensic artifact collection if you like fancy terms. OSXCollector gathers information from plists, sqlite databases and the local filesystem then packages them in an easy to read and easier to parse JSON file.
osxcollector.py is a single Python file that runs without any dependencies on a standard OS X machine. This makes it really easy to run collection on any machine – no fussing with brew, pip, config files, or environment variables. Just copy the single file onto the machine and run it. sudo osxcollector.py is all it takes.
$ sudo osxcollector.py
Wrote 35394 lines.
Output in osxcollect-2014_12_21-08_49_39.tar.gz

Details of Collection

The collector outputs a .tar.gz containing all the collected artifacts. The archive contains a JSON file with the majority of the information. Additionally, a set of useful logs from the target system is included.
The collector gathers many different types of data including:
  • install history and file hashes for kernel extensions and installed applications
  • details on startup items including LaunchAgents, LaunchDaemons, ScriptingAdditions, and other login items
  • OS quarantine, the information OS X uses to show ‘Are you sure you wanna run this?’ when a user is trying to open a file downloaded from the internet
  • file hashes and source URL for downloaded files
  • a snapshot of browser history, cookies, extensions, and cached data for Chrome, Firefox, and Safari
  • user account details
  • email attachment hashes
The docs page on GitHub contains a more in depth description of collected data.

Performing Basic Forensic Analysis

Forensic analysis is a bit of an art and a bit of a science. Every analyst will see a bit of a different story when reading the output from OSXCollector – that’s part of what makes analysis fun.
Generally, collection is performed on a target machine because something is hinky: anti-virus found a file it doesn’t like, deep packet inspection observed a callout, endpoint monitoring noticed a new startup item, etc. The details of this initial alert – a file path, a timestamp, a hash, a domain, an IP, etc. – are enough to get going.
OSXCollector output is very easy to sort, filter, and search for manual forensic analysis. By mixing a bit of command-line-fu with some powerful tools like grep and jq, a lot of questions can be answered. Here are just a few examples:
Get everything that happened around 11:35
$ cat INCIDENT32.json | grep '2014-01-01 11:3[2-8]'
Just the URLs from that time period
$ cat INCIDENT32.json | grep '2014-01-01 11:3[2-8]' | jq 'select(has("url"))|.url'
Just details on a single user
$ cat INCIDENT32.json | jq 'select(.osxcollector_username=="ivanlei")|.'

Performing Automated Analysis with OutputFilters

Output filters process and transform the output of OSXCollector. The goal of filters is to make it easy to analyze OSXCollector output. Each filter has a single purpose. They do one thing and they do it right.
For example, the FindDomainsFilter does just what it sounds like: it finds domain names within a JSON entry. The domains are added as a new key to the JSON entry. Given the input:
{
"visit_time": "2014-10-16 09:44:57",
"title": "Pizza New York, NY",
"url": "http://www.yelp.com/search?find_desc=pizza&find_loc=NYC"
}
the FindDomainsFilter would add an osxcollector_domains key to the output:
{
"visit_time": "2014-10-16 09:44:57",
"title": "Pizza New York, NY",
"url": "http://www.yelp.com/search?find_desc=pizza&find_loc=NYC",
"osxcollector_domains": ["yelp.com","www.yelp.com"]
}
This enhanced JSON entry can now be fed into additional OutputFilters that perform actions like matching domains against a blacklist or querying a passive DNS service for domain reputation information.
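
As a minimal sketch of that idea (not the real OSXCollector OutputFilter API), the snippet below streams line-delimited JSON entries and tags any whose osxcollector_domains values hit a user-supplied blacklist; the blacklist contents are hypothetical.

import json
import sys

BLACKLISTED_DOMAINS = {"evil-c2.example.com", "dropper.example.net"}

def filter_lines(lines, blacklist=BLACKLISTED_DOMAINS):
    """Yield each JSON entry, adding an osxcollector_blacklist key on a match."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        entry = json.loads(line)
        hits = [d for d in entry.get("osxcollector_domains", []) if d in blacklist]
        if hits:
            entry["osxcollector_blacklist"] = hits
        yield entry

if __name__ == "__main__":
    for entry in filter_lines(sys.stdin):
        print(json.dumps(entry))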

Basic Filters

FindDomainsFilter

Finds domain names in OSXCollector output and adds an osxcollector_domains key to JSON entries.

FindBlacklistedFilter

Compares data against user defined blacklists and adds an osxcollector_blacklist key to matching JSON entries.
Analysts should create blacklists for domains, file hashes, file names, and any known hinky stuff.

RelatedFilesFilter

Breaks an initial set of file paths into individual file and directory names and then greps for these terms. The RelatedFilesFilter is smart and ignores usernames and common terms like bin or Library.
This filter is great for figuring out how evil_invoice.pdf ended up on a machine. It’ll find browser history, quarantines, email messages, etc. related to a file.

ChromeHistoryFilter and FirefoxHistoryFilter

Builds a really nice browser history sorted in descending time order. The output is comparable to looking at the history tab in the browser but contains more info, such as whether the URL was visited because of a direct user click or in a hidden iframe.
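
For comparison, here is a small sketch (my own, not OSXCollector's actual ChromeHistoryFilter) that builds the same kind of time-ordered listing directly from a copy of Chrome's History SQLite database; Chrome stores visit times as microseconds since 1601-01-01.

import sqlite3
from datetime import datetime, timedelta

WEBKIT_EPOCH = datetime(1601, 1, 1)

def chrome_history(history_db_copy):
    """Yield (timestamp, url, title, visit_count) in descending time order."""
    conn = sqlite3.connect(history_db_copy)
    rows = conn.execute(
        "SELECT last_visit_time, url, title, visit_count "
        "FROM urls ORDER BY last_visit_time DESC")
    for last_visit, url, title, count in rows:
        when = WEBKIT_EPOCH + timedelta(microseconds=last_visit)
        yield when, url, title, count

if __name__ == "__main__":
    # Work on a copy of the History file; the live one is locked while Chrome runs.
    for when, url, title, count in chrome_history("History.copy"):
        print(when.isoformat(), count, title, url)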

Threat API Filters

OSXCollector output typically has thousands of potential indicators of compromise like domains, urls, and file hashes. Most are benign; some indicate a serious threat. Sorting the wheat from the chaff is quite a challenge. Threat APIs like OpenDNS, VirusTotal, and ShadowServer use a mix of confirmed intelligence information and heuristics to augment and classify indicators and help find the needle in the haystack.

OpenDNS RelatedDomainsFilter

Looks up an initial set of domains and IPs with the OpenDNS Umbrella API and finds related domains. Threats often involve relatively unknown domains or IPs. However, the 2nd-generation related domains often relate back to known malicious sources.

OpenDNS & VirusTotal LookupDomainsFilter

Looks up domain reputation and threat information in VirusTotal and OpenDNS.
These filters use a heuristic to determine what is suspicious. This can create false positives, but usually a download from a domain marked as suspicious is a good lead.

ShadowServer & VirusTotal LookupHashesFilter

Looks up hashes with the VirusTotal and ShadowServer APIs. VirusTotal acts as a blacklist of known malicious hashes while ShadowServer acts as a whitelist of known good file hashes.

AnalyzeFilter – The One Filter to Rule Them All

AnalyzeFilter is Yelp’s one filter to rule them all. It chains all the previous filters into one monster analysis. The results, enhanced with blacklist info, threat APIs, related files and domains, and even pretty browser history, are written to a new output file.
Then the Very Readable Output Bot takes over and prints out an easy-to-digest, human-readable, nearly-English summary of what it found. It’s basically equivalent to running:
$ cat SlickApocalypse.json | \
python -m osxcollector.output_filters.find_domains | \
python -m osxcollector.output_filters.shadowserver.lookup_hashes | \
python -m osxcollector.output_filters.virustotal.lookup_hashes | \
python -m osxcollector.output_filters.find_blacklisted | \
python -m osxcollector.output_filters.related_files | \
python -m osxcollector.output_filters.opendns.related_domains | \
python -m osxcollector.output_filters.opendns.lookup_domains | \
python -m osxcollector.output_filters.virustotal.lookup_domains | \
python -m osxcollector.output_filters.chrome_history | \
python -m osxcollector.output_filters.firefox_history | \
tee analyze_SlickApocalypse.json | \
jq 'select(false == has("osxcollector_shadowserver")) |
select(has("osxcollector_vthash") or
has("osxcollector_vtdomain") or
has("osxcollector_opendns") or
has("osxcollector_blacklist") or
has("osxcollector_related"))'
and then letting a wise-cracking analyst explain the results to you. The Very Readable Output Bot even suggests new values to add to your blacklists.
This thing is the real deal and our analysts don’t even look at OSXCollector output until after they’ve run the AnalyzeFilter.

Give It a Try

The code for OSXCollector is available on GitHub – https://github.com/Yelp/osxcollector. If you’d like to talk more about OS X disk forensics feel free to reach out to me on Twitter at @c0wl.


Source: http://engineeringblog.yelp.com/