Sunday, December 30, 2012

Malware Response with Mandiant Redline


Redline is MANDIANT's free tool for investigating hosts for signs of intrusion activity and malware. It analyzes files and memory, building a threat assessment profile along the way. In a few words: a great tool for live incident response and data analysis. Instead of manual live data acquisition (the manual approach is a bad idea anyway), we can use a Redline Collector to gather volatile and non-volatile information, optionally adding IOCs, and then quickly analyze the results on the investigator's machine, again largely automatically (with support from whitelists and the MRI). At the same time, the results can be cross-checked with Memoryze or Volatility for a more 'controlled' study. Below is a quick description of Redline's modules and features.

Rapid Triage:
With no installation required and minimal alteration to the system state, a single .bat script can be run on the subject host. It audits all processes, drivers and network connections. What is more, Redline triage can capture the file structure and a memory dump. Great for live response on volatile data; a hedged sketch of wrapping such a run follows below.
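As an illustration only, here is a minimal Python sketch of how a collector run might be wrapped and documented from the responder's side. RunRedlineAudit.bat is the real collector entry point named later in this post, but the drive letters, paths and logging details below are assumptions, not Redline's actual layout:

    # Hedged sketch: wrap a Redline Collector run, then hash what it produced.
    # E:\Collector and E:\Sessions are assumed paths on write-protected media.
    import hashlib
    import subprocess
    from datetime import datetime, timezone
    from pathlib import Path

    COLLECTOR = Path(r"E:\Collector\RunRedlineAudit.bat")  # assumed USB path
    OUTPUT_DIR = Path(r"E:\Sessions")                      # assumed output location

    def run_collector():
        started = datetime.now(timezone.utc).isoformat()
        subprocess.run([str(COLLECTOR)], check=True)  # runs the audit on the host
        print("collector started", started,
              "finished", datetime.now(timezone.utc).isoformat())

    def hash_results():
        # An MD5 per collected file makes any later alteration detectable.
        for item in sorted(OUTPUT_DIR.rglob("*")):
            if item.is_file():
                print(hashlib.md5(item.read_bytes()).hexdigest(), " ", item)

    if __name__ == "__main__":
        run_collector()
        hash_results()

Hashing every collected file immediately gives a baseline against which any later tampering with the evidence can be detected.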

Reveals Hidden Malware:
The Collector is capable of memory imaging and deep inspection. It works at the kernel level, so malware (rootkits, hidden processes) that is present in memory, but possibly not on disk, can be seen and becomes obvious.

Guided Analysis:
The description below, taken from the guide, presents Redline's philosophy and purpose. The statement that it "takes the guesswork out of task and time allocation" is very true, and perfectly shows that memory inspection and intrusion detection are not easy tasks.

MANDIANT Redline streamlines memory analysis by providing a proven workflow for analyzing malware based on relative priority. This takes the guesswork out of task and time allocation, allowing investigators to provide a focused response to the threats that matter most. Redline calculates a “Malware Risk Index” that highlights processes more likely to be worth investigating, and encourages users to follow investigative steps that suggest how to start. As users review more audits from clean and compromised systems, they build up the experience to recognize malicious activity more quickly.

Investigative Steps:
Another piece of good advice comes from MANDIANT. It may be obvious, but I really like it when developers and IT security pros recommend training and learning.

Redline can collect a daunting amount of raw information. The key to becoming an effective investigator is to review Redline data from a variety of “clean” and “compromised” systems. As you gain experience, you will learn to quickly identify suspicious patterns.

Malware Risk Index Scoring and MD5 Whitelisting:
During deep analysis Redline uses its rules and techniques to calculate an MRI score for each process in the captured memory dump or live data. The MRI uses different colors to indicate risk. Of course there will be some false positives and false negatives, but Redline also provides an easy way to tune the scoring. Furthermore, the majority of processes are legitimate and standard: MANDIANT publishes MD5 hashes for multiple OSes and their components, and whitelisting filters out tons of information that is known and unaltered, which is definitely not interesting during an intrusion detection process. A hedged sketch of this idea follows below.
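To make the whitelisting idea concrete, here is a minimal Python sketch, assuming a plain-text whitelist with one MD5 per line; the file names and scan root are my assumptions, not Redline's actual format:

    # Hedged sketch: drop files whose MD5 appears on a known-good whitelist.
    # whitelist.txt and the scan root below are assumptions for illustration.
    import hashlib
    from pathlib import Path

    def load_whitelist(path):
        # One lowercase MD5 per line (assumed format, not Redline's own).
        return {line.strip().lower()
                for line in path.read_text().splitlines() if line.strip()}

    def suspicious_files(root, whitelist):
        # Yield only files whose hash is NOT known-good - these deserve a look.
        for item in root.rglob("*"):
            if item.is_file():
                digest = hashlib.md5(item.read_bytes()).hexdigest()
                if digest not in whitelist:
                    yield digest, item

    if __name__ == "__main__":
        wl = load_whitelist(Path("whitelist.txt"))
        for digest, path in suspicious_files(Path(r"C:\Windows\System32"), wl):
            print(digest, path)

In practice the whitelist would come from MANDIANT's published hash sets, and anything it filters out can be safely deprioritized.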



Indicators of Compromise (IOCs):
mandiant_ioc_finder.exe can be used to look for IOCs on the subject system. MANDIANT has developed an open, XML-based standard for defining and then sharing threat information.

Going well beyond static signature analysis, IOCs combine over 500 types of forensic evidence with grouping and logical operators to provide advanced threat detection capability.
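Just to illustrate the "grouping and logical operators" idea, here is a toy Python evaluator. This is a deliberately simplified stand-in: the real OpenIOC schema is XML with far richer term types, and every key, value and hash below is a made-up placeholder:

    # Hedged sketch: nested AND/OR groups over observed evidence, a much
    # simplified stand-in for the real OpenIOC XML schema.
    def matches(node, evidence):
        head = node[0]
        if head == "AND":
            return all(matches(child, evidence) for child in node[1:])
        if head == "OR":
            return any(matches(child, evidence) for child in node[1:])
        key, value = node          # a leaf test, e.g. ("file_name", "svhost.exe")
        return value in evidence.get(key, set())

    # Evidence gathered from a host: each key maps to the set of observed values.
    evidence = {
        "file_name": {"svhost.exe", "notepad.exe"},
        "file_md5": set(),
        "dns_query": {"example.org"},
    }

    # "svhost.exe seen AND a known-bad hash seen, OR a beacon to a bad domain".
    # d41d8cd9... is just the MD5 of the empty string, used as a placeholder.
    ioc = ("OR",
           ("AND", ("file_name", "svhost.exe"),
                   ("file_md5", "d41d8cd98f00b204e9800998ecf8427e")),
           ("dns_query", "evil.example.com"))

    print(matches(ioc, evidence))  # False: the name matched but the hash did not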

We have several methods of data collection: the Standard Collector, the Comprehensive Collector and the IOC Search Collector. Each of them can be modified and then provides fast, predefined options for incident response. Memory image acquisition can be enabled in all of them. The 'Analyze Data' section provides several options: we simply choose what we want to analyze (on the investigator's workstation!) and which 'external' feeds should be added. Do we have an IOC report or a whitelist imported?


One last interesting option, nicely done:
"Analyze this Computer" option is offered only for training and demonstration purposes. It performs a robust acquisition and analysis of the local system, with options for saving the results. This is an great way to gain experience using Redline but is not recommended for investigations. For real-world use on multiple systems, follow the workflow for Using a Collector. It is important that Redline analysis be carried out on a clean and protected workstation: this is easily accomplished using the Collector to bring captured data from potentially compromised systems to a secure Redline workstation. Do not risk compromising your collected evidence!

Workflow 
The process delivered by MANDIANT is rather straightforward: collect, import and investigate. Collecting and importing are very easy, and the investigation itself is simple as well. We have several options when it comes to collecting (do not forget about the IOC Finder). Here we need to create a new 'case' and just launch RunRedlineAudit.bat on the compromised system. Please remember to plan where the acquired data will be stored! Once data has been collected from the host, it is imported and automatically saved as a .mans database file.



Investigation
Typical investigation steps include reviewing MRI scores, network connections/ports, memory sections/DLLs, handles, hooks and drivers. It is worth mentioning that whitelists should be used when analyzing DLLs, and there is also a set of filters. What is more, we have a bunch of tools for memory image study and acquisition; we can even acquire a driver. Another great feature is the Timeline, which provides a time-ordered list of events (use TimeWrinkle and TimeCrunch for filtering; a toy illustration follows below). To sum up, we have a great tool with a set of awesome features. I will certainly spend days getting to know Redline better, as it is a complex solution (especially with MIR added) connecting incident response, malware response, intrusion detection and dead analysis. I strongly recommend reading the 'investigation' part of the guide, as well as the appendix on best practices.
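To make the Timeline idea concrete, here is a toy Python sketch that merges events from several sources into one time-ordered list and filters to a window around a time of interest, roughly the spirit of TimeWrinkle; the event format, sample data and window width are my assumptions, not Redline's internals:

    # Hedged sketch: merge events and keep only those near a pivot time,
    # mimicking the spirit of Redline's TimeWrinkle filter.
    from datetime import datetime, timedelta

    FMT = "%Y-%m-%d %H:%M:%S"

    events = [
        ("2012-12-30 23:59:59", "network", "beacon to 203.0.113.7"),
        ("2012-12-30 10:15:05", "registry", "Run key modified"),
        ("2012-12-30 10:15:02", "file", "evil.tmp written"),
    ]

    def timeline(rows):
        # Merge every source into one time-ordered list.
        return sorted(rows, key=lambda r: datetime.strptime(r[0], FMT))

    def wrinkle(rows, pivot, minutes=5):
        # Keep only events within +/- `minutes` of the pivot time.
        p = datetime.strptime(pivot, FMT)
        window = timedelta(minutes=minutes)
        return [r for r in timeline(rows)
                if abs(datetime.strptime(r[0], FMT) - p) <= window]

    for row in wrinkle(events, "2012-12-30 10:15:00"):
        print(*row)   # prints the file and registry events, not the beacon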

Source for manuals, whitepapers and software (+whitelist):

For more information about the IOC standard and IOC Finder, visit http://openioc.org/ and http://www.mandiant.com/resources/download/ioc-finder/




Saturday, December 29, 2012

Live Response with WFT


    The Windows Forensic Toolchest™ (WFT) is designed to provide a structured and repeatable automated Live Forensic Response, Incident Response, or Audit on a Windows system while collecting security-relevant information from the system. WFT is essentially a forensically enhanced batch processing shell capable of running other security tools and producing HTML based reports in a forensically sound manner. 
   This description, taken from http://www.foolmoon.net/security/wft/, could not be written better and tells us perfectly what WFT is. In other words, it is a lightweight shell program with a config file that can run security tools automatically and prepare an HTML report. It supports Windows NT/2K/XP/2K3/Vista/Win7 and is commercial. A good presentation of this tool can be found here.


As we can read in the support materials: "The tools included in the default configuration file do not make any significant alterations of the system they are being run on."
This tool is a great framework for incident response and intrusion detection; it can also be used by administrators for troubleshooting and similar tasks. What I would like to highlight is the automation. Very often we know what information we want to collect (it can be pslist, fport, handles, etc.) and we want that data collected every time. This part of incident response is therefore repeatable, and with WFT it can be done reliably. What is more, the output is well designed: we get it in HTML and in .txt form. The developers did not forget about documentation, either. Every step taken by the software is logged, and this can also be seen on standard output. Additionally, we can track all changes made by the tools, and we are told that some extractions take more or less time and what alterations are being made (which .exes/.dlls are needed). All activities and scripting follow a sound methodology, including computing MD5/SHA1 checksums.
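The core idea (run a fixed list of tools, timestamp everything, checksum the output, render a report) is simple enough to sketch. Below is a minimal Python approximation of such a batch runner; the tool list, file names and report layout are my assumptions and have nothing to do with WFT's actual configuration format:

    # Hedged sketch of a WFT-style batch runner: run each tool, save its
    # output, hash it, and emit a tiny HTML index. Tool list is an assumption.
    import hashlib
    import html
    import subprocess
    from datetime import datetime, timezone
    from pathlib import Path

    TOOLS = [["tasklist"], ["ipconfig", "/all"], ["netstat", "-ano"]]
    OUT = Path("wft_like_output")
    OUT.mkdir(exist_ok=True)

    rows = []
    for cmd in TOOLS:
        started = datetime.now(timezone.utc).isoformat()
        result = subprocess.run(cmd, capture_output=True, text=True)
        report = OUT / (cmd[0] + ".txt")
        report.write_text(result.stdout)
        sha1 = hashlib.sha1(report.read_bytes()).hexdigest()  # checksum evidence
        rows.append("<tr><td>%s</td><td>%s</td><td>%s</td></tr>"
                    % (html.escape(" ".join(cmd)), started, sha1))

    (OUT / "index.html").write_text(
        "<table><tr><th>tool</th><th>started (UTC)</th><th>sha1</th></tr>"
        + "".join(rows) + "</table>")

A real run would of course execute trusted tool binaries from the read-only media rather than whatever happens to be on the subject system's PATH.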


In another presentation this sentence can be found: "WFT should be run from a CD (or USB memory stick) to ensure the forensic integrity of the evidence it collects. In addition to the WFT binary, users will also need to copy any external programs it will be invoking to the CD / memory stick." Okay, but what about remote forensics? I will end here; next time I will focus on remote methods of data acquisition, and on how to do it while keeping a sound data collection methodology.

Wednesday, December 12, 2012

Live Response vs. Memory Image Analysis


It seems desirable to start with a non-technical statement by Malin, Casey and Aquilina before continuing to the more 'digital' part of this article, namely:
“When malware  is discovered on a system, the importance of organized methodology, sound analysis, steady documentation, and attention to evidence dynamics all outweigh the severity of any time pressure to investigate.”
Here I would like to focus on evidence dynamics, data preservation and collection. The other aspects mentioned above are quite obvious and probably become more and more intuitive over time. Malware and hackers are getting stronger and increasingly use anti-forensic methods to cover their actions and remove fingerprints. To list several examples: encryption, packing, detection of virtual environments, halting when a debugger is detected via the PEB, etc. Deleting log files, flushing caches and generating fake traffic are also common. Evidence dynamics I would describe as the way information changes and disappears; I think this concept and the order-of-volatility (OOV) theory can safely be combined. When acquiring evidence from a subject system, we know that volatile data has a relatively short life span, and any activity of ours can alter this type of information. When data is collected carelessly, even non-volatile data can be disturbed and lose its credibility. There are, however, ways to minimize alteration to the system: checking how forensic tools work, what alterations they cause, which registry keys they touch and so on makes it possible to distinguish the investigator's tools from the malware or the intruder.
With that in mind, there are several ways to respond live to an incident. As a reminder, we can do live response with toolkits and frameworks, acquire a memory dump, or both, depending on the situation. But you may ask: what should come first, live response or memory imaging?
The more tools and processes you run, the more you alter the system; this is obvious. The situation becomes more dangerous when the investigator does not know how his tools work. These tools can also crash and overwrite digital evidence. Tragedy. Anti-forensic methods may be in play, or even a rootkit, in which case both the system's own tools and ours may return fake information. What is more, when collecting data on the subject system we can overwrite segments of the file system, destroying potential evidence. And sometimes we simply make mistakes, so omissions during the investigation may happen. All of these arguments can be summed up as:

Before collecting any volatile data on the subject system, first acquire a memory dump.

This concept is strongly supported in the modern literature. Personally, I agree that acquiring the memory dump first helps us avoid omissions and missteps, though it does not necessarily minimize alteration on the victim machine. Concordia University and one of the CERTs conducted tests to check which method, live response or memory imaging, causes more alteration on the system. It turned out that the impact was bigger when live response was performed: more memory pages were altered. On the other hand, the difference between the two approaches shrank as newer systems were examined. In another source I found that each new generation of tools leaves fewer fingerprints than the previous one. In my humble opinion much remains to be done in this particular field, and many more tests should be conducted. For the moment, I believe that memory imaging as the first step is the best action that can be taken during incident response.

Monday, December 10, 2012

Memory Analysis Tools

One of the very first steps in a digital investigation is memory imaging. I will soon write about why it is so important, and why not to launch a live incident response instead. I would also like to look at the IDIP model, which in my opinion is mature and elegant. In this article I just briefly point to some awesome tools for post-mortem, or dead, analysis of memory dumps. These tools are free, well known and strongly supported: Volatility, Memoryze and PTFinder.


Volatility, supported by Volatile Systems, is the most powerful of the presented set. Just visit the web page to find tons of useful information, documentation and a list of capabilities. What is more, there is this awesome text:

The Volatility Framework demonstrates our commitment to and belief in the importance of open source digital investigation tools. Volatile Systems is committed to the belief that the technical procedures used to extract digital evidence should be open to peer analysis and review. We also believe this is in the best interest of the digital investigation community, as it helps increase the communal knowledge about systems we are forced to investigate. Similarly, we do not believe the availability of these tools should be restricted and therefore encourage people to modify, extend, and make derivative works, as permitted by the GPL.
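As a small taste of what dead analysis with Volatility looks like in practice, here is a hedged sketch driving a few classic Volatility 2.x plugins from Python. pslist, dlllist and connscan are real plugin names from that era, but the image file name and the profile are assumptions; check them against your own dump:

    # Hedged sketch: run a few classic Volatility 2.x plugins over a dump.
    # memory.img and the WinXPSP2x86 profile are assumptions for illustration.
    import subprocess

    IMAGE = "memory.img"
    PROFILE = "WinXPSP2x86"
    PLUGINS = ["pslist", "dlllist", "connscan"]

    for plugin in PLUGINS:
        print("===", plugin, "===")
        subprocess.run(["python", "vol.py", "-f", IMAGE,
                        "--profile=" + PROFILE, plugin], check=True)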
On the other hand we have a product from MANDIANT: Memoryze. It is memory forensic software "that helps incident responders find evil in live memory." A list of features can be found here.


One more tool must be mentioned: PTFinder. Volatility and PTFinder are often compared. Each of the listed tools uses a different technology and approach, so it is advisable to know all of them and to check how they behave on specific cases: how they differ (or complement each other) and what metadata each of them can recover.


More information about PTFinder can be found on this page. In the future, hopefully, I will present some conclusions on how to use these tools on a specific memory dump. I would like to make some kind of comparison, just to learn the strengths, weaknesses and capabilities of the mentioned frameworks.

Sunday, December 2, 2012

GCFIM model


I have mentioned that I am involved in developing an incident response process, and I am constantly learning something new in this field. Previously I presented how such a process can be created based on the approach introduced by Lockheed Martin. The idea is to gather as much information as possible before any process is created. Looking at the latest research, there seem to be five maturity stages of an incident response process. What can be said is that digital crime keeps getting stronger, and it is hard to keep thinking in a purely linear way. We need to develop new methodologies and decide how to react when an alarm is raised. For years, researchers and IT investigators have been developing new investigation phases, models and approaches. I think that all the sub-processes, such as forensic investigation, APT response and intrusion detection in general, must be combined very closely and together form the incident response process. That process should also be constantly developed to fit and adapt to changing threats and new technologies. Further problems are the number of false positives and the sophistication of attacks. As the crime scene becomes more and more 'digital' and moves from the streets into the 'net', the need for digital investigation has appeared. An incident response process should answer how to behave and what to do when security controls report policy violations or breaches. Computer forensic investigation should fit neatly into that process and have room to develop. The crux is: how do we check whether an incident is legitimate, and how do we prove the relevance of evidence collected at the digital crime scene? Without a process of verification and a forensic approach to alarms, no incident response process could exist. I would like to present how the forensic investigation process looks and, hopefully, describe how it is applied within the incident response process.
All of the above, and much more, was already being considered years ago, when the FBI Laboratory presented the very first model of computer forensic investigation: it was in 1984 that Pollitt introduced the Computer Forensic Investigative Process.

Computer Forensic Investigative Process

In the first phase the investigator was asked to acquire evidence, in a forensically sound manner of course, then to identify components within the evidence and transform them into a human-readable form. Next, in the Evaluation phase, the analyst checked whether any identified component represented legitimate evidence. At the end of the process, all collected evidence was presented in a court of law. I suppose this model can be freely used at the beginning of any process creation: we have preparation, analysis and outputs, possibly mapped onto the SIPOC approach. Many years passed and many other models were presented. Awesome work in this field was done by Carrier and Spafford, who presented the IDIP (Integrated Digital Investigation Process). I think IDIP was the first truly mature model in this field; Carrier also introduced the notion of the 'digital crime scene'. They described pre-preparation phases, in which the appropriate personnel and tools should be trained and prepared, and they strongly emphasized the importance of security controls and the detection process. Another nice touch: in the IDIP model we can find sub-processes and a division into technical and non-technical data acquisition. This model was upgraded several years later and named EDIP.

Great work was also done by the team of Yunus Yusoff, Roslan Ismail and Zainuddin Hassan. Building on other investigation models, they grouped together the similarities of the various methods and methodologies and proposed a generic investigation process known as GCFIM (the Generic Computer Forensic Investigation Model).

Generic Computer Forensic Investigation Model

I would like to briefly describe the phases of this model. The first one is obvious: appropriate preparation must take place before any other sub-process can start. Tools and personnel should be trained and checked, and additional support must be secured from management to back the CSIRT during the investigation (privileges, access, approvals). The second stage, Acquisition and Preservation, contains sub-processes such as identifying, collecting, transporting, storing and preserving evidence; simply put, collect and prepare data for further analysis. The Analysis and Presentation stages are easily understood and can naturally be divided into more detailed sub-processes. The last stage is all about mitigation, resolution and reporting. In the diagram we can notice a kind of 'sequence', but there is also the possibility of moving backward during the investigation, with the ability to ask questions one more time.
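Since the distinguishing feature of GCFIM is exactly this ability to move backward, here is a toy Python sketch of the phase flow as a small state machine; the phase names follow the paper as I read it, and everything else is illustrative only:

    # Hedged sketch: GCFIM's phase flow as a tiny state machine. Forward is
    # the normal sequence; backtracking mirrors the model's return arrows.
    PHASES = ["Pre-Process", "Acquisition & Preservation",
              "Analysis", "Presentation", "Post-Process"]

    class Investigation:
        def __init__(self):
            self.idx = 0

        def phase(self):
            return PHASES[self.idx]

        def advance(self):
            # Normal forward flow through the model.
            if self.idx < len(PHASES) - 1:
                self.idx += 1

        def revisit(self, phase):
            # Backtrack to any earlier phase to ask a question one more time.
            target = PHASES.index(phase)
            if target <= self.idx:
                self.idx = target

    inv = Investigation()
    inv.advance(); inv.advance()                 # now in Analysis
    inv.revisit("Acquisition & Preservation")    # new evidence is needed
    print(inv.phase())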
The GCFIM is a perfect generic model that any CSIRT or forensic investigator can start from when creating their own incident response process. The main flow of this model is easily recognizable in incident response processes, which also contain forensic investigation. A kind of similarity between the two emerges: one should be very general and apply to all incidents (in the physical world), where particular policies and people need to be engaged, while the second is more detailed and sophisticated, taking place at the digital crime scene.