Wednesday, 28 November 2012

Tripwire



Another interesting piece of information came my way. Possibly a little late, but I am pretty sure many IT pros may not be aware of it. What is known and obvious is that we use SIEMs for several purposes, but the most critical one is incident detection. The problem arises with large networks and infrastructures: how do we separate legitimate alerts from false positives? With large volumes of data the problem gets even bigger. The idea of "external feeds" or "trusted sources" is, I suppose, recognisable not only to IT security people. It means that we feed the SIEM not only with logs and flows, but also with other data sources. As an example, we can be given the list of privileged users that have access to a particular host, or alerts from consulting companies that "watch" our company from the outside. Other trusted sources can be change and configuration data. These sources of information add context to event/flow correlation and let IT security monitoring teams focus only on relevant alerts. And here is the power of the new approach. I would also like to write about the Tripwire VIA solution, which led me to some thoughts.



First of all, the Tripwire company and its solutions... are awesome. I like this kind of fascination. The Tripwire VIA platform contains three components: Enterprise (configuration management), Log Center (log and event management) and Data Mart. As Tripwire says:

“As security breaches continue to rise(...). Log collection, retention and reporting are an accepted best practice for security and mandatory requirements of most regulatory policies. For years, though, log management solutions have generated a lot of  noise without helping detect threats”

Here comes Tripwire. Deploying a SIEM is not an easy task. What is more, incident detection and SIEM maintenance are another hard task. Of course, this is not the end of the problems. There is also the expectation that a SIEM should be easy to use. Tripwire says it has a bunch of answers to these issues. One of the greatest things about the Tripwire VIA platform is its approach to file integrity. Tripwire Enterprise gives a concise insight into system configuration and status (compliance). That information is then correlated with Log Center, which adds context: a relationship between plain log activity and the changes that occurred on the subject system. This is the approach that attracts me the most. Again, awesome. Of course Tripwire supports standard capabilities such as queries, event correlation, trend analysis and dashboarding. What is more, the notion of "active data" is introduced, meaning the data can be easily accessed even when it is older (and possibly stored "deeper"). On top of that, it supports incident detection, risk management and tracking system integrity. Furthermore, I saw the methodology of rule construction – another great feature! I cannot say whether graphical creation of rules (by drag and drop) is mature or not. What I think is that the newest SIEMs are very complex, and very often it is hard to put together the rule you really want to deploy; there are also problems with optimisation and documentation. So yes, the graphical rule creator is a great feature and may make the incident detection process simpler – maybe it is another form of support from the system for building more efficient and complex rules. Dashboarding is another subject, and hopefully I will cover it next time. But here, just a quick hint: Tripwire gives the opportunity to watch the most relevant logs on an "event relationship diagram" presenting nodes (hosts) and connections. It is always a great idea to watch logs or metadata in a more human-friendly way than raw events in real-time streaming mode (aka tcpdump).
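Just to make the file-integrity idea concrete – this is not Tripwire's mechanism, only a minimal sketch using standard Unix tools, with the monitored directory and output paths chosen arbitrarily – a baseline of checksums can be built once and compared later, and any difference is a candidate event to correlate with logs from the same time window:

# build a baseline of checksums for a monitored directory (path is an example)
find /etc -type f -exec sha256sum {} \; | sort > /var/tmp/etc.baseline

# later: recompute and diff against the baseline; changed, added or removed files
# show up here and can be correlated with log activity from the same period
find /etc -type f -exec sha256sum {} \; | sort > /var/tmp/etc.current
diff /var/tmp/etc.baseline /var/tmp/etc.current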


To finish, just a few words about Tripwire components and mechanisms. Tripwire is known – I suppose – mostly for its Enterprise component: it stores the data, encrypts it and applies a checksum to ensure its integrity. In the database it can also store vulnerability data for further reference and correlation. Other features, such as alerting, multi-platform support and a wide range of log sources, are well known. Tripwire Enterprise is also integrated with the ESM solution developed by ArcSight.

In my personal opinion, system, configuration and file integrity combined with logs and other metadata give a great base of information for incident detection and intrusion detection. The monitored changes can be anything – files, executables, configs and more. Any suspicious behaviour can simply be checked with rules and alerting, although that can be difficult in a big environment (storage issues). When it comes to post-mortem analysis (intrusion) or malware incident response, I think this kind of data shows perfectly what was going on. Of course there are great tools for tracking software/file behaviour, but imagine that this can be automatic and raise an alert at any time.

Additional helpful abbreviations: GRC solutions (Governance, Risk, and Compliance), FIM solutions (File Integrity Monitoring), SCM solutions (Security Configuration Management).

Monday, 26 November 2012

Hacker's Challenge

Hacker's Challenge by Mike Schiffman is one of the best security workbooks out there. The subtitle "Test your incident response skills using 20 scenarios" perfectly describes what this book is about. The book is divided into two parts. The first one contains 20 different incidents and case studies. We read the kind of information gathered during an "initial response", supported with logs, outputs etc. (in the printed version). What is more, the author writes the plot in an engaging way, giving us information about the IT infrastructure, time frames and so on. After each scenario we are given several questions such as "How did the intruder break in?", "What is significant about these files?" or "What vulnerability was used?".


The second part of the book – the most interesting one – contains the answers, or correct responses, to the incidents. There we can find a great set of definitions, explanations of the logs and the clues. Furthermore, the author also says what prevention should be used to avoid such situations in the future, and gives information about mitigation. Personally, I have a lot of fun exercising and exploring different kinds of incidents with this material. Happily, there are several volumes of "Hacker's Challenge". Happy to recommend.

Sunday, 25 November 2012

HBE analysis - part 1


Just to remind you, during a digital investigation we are supposed to gather two types of documentation. The first one – non-technical – is taken during the initial response. The investigator looks for any information that can be obtained on or near the crime scene, checking IT infrastructure, personnel and circumstances. As an example, imagine that you call your support line saying that your app is not working properly. As usual, they call you back the next day, informing you that they need logs, screenshots or any information showing what state your application was in when you faced the problem. And it is too late. So yes, the purpose of non-technical information is to capture when the incident happened, what the IT infrastructure status was, which personnel were involved and so on. Believe me or not, sometimes this information is the clue. Moving forward: on the other hand we have the technical material, which can be divided into NBE and HBE. I am pretty sure the first one was well presented before – and I will come back to it in the future – but now I would like to focus on Host Based Evidence. So what does HBE mean?


It is simply the collection of information gathered during the live incident response process. I can say that memory imaging (and disk imaging) is also part of live response – sure, one of its subtypes. Host based evidence is the set of logs, documents, files, records, network connections, routing tables, lists of processes, libraries and so on. This is everything that can be obtained from the machine itself, and not from the network nodes nearby.

There are several ways of HBE acquisition:
a) Live Response (involves several approaches and also data acquisition for post-mortem  analysis)
b) Forensic Duplication (Digital Archeology and Malware Investigation)

In this article I would like to describe what Live Response looks like (apart from the surrounding circumstances such as the intrusion detection process, malware, or simply troubleshooting). What is more, there are several (commercial) tools that can perform live response on a remote machine without requiring deep knowledge from the investigator. Here I want to present what information should be obtained from the subject host and which tools will be helpful. I would also like to remind you that there is a myriad of software that can be used, and the investigator should take appropriate training. I strongly recommend reading several books on data acquisition, computer forensics or even file-system investigation. Please refer to the short table of contents:

         Prerequisites
         Soundness (mention anti-forensics)
         Good practices
         OOV theory
         Data Acquisition
         Why Live?
         Summary

Everything you do during an investigation must be planned and follow a clear methodology. You don't want to launch panic mode and find yourself somewhere between broken documentation and a destroyed subject system. At higher levels of maturity in incident response, the investigator is supposed to adjust his thinking and tools to achieve as much as possible. By achieving, I mean obtaining the "correct" information and following clues in the process of problem detection. After reviewing all the collected data, or during that process, the investigator – alone or with the CSIRT – will find himself in a situation where an appropriate response strategy must be chosen. Also, if possible, other information can be checked and collected. Please refer to the previously posted article. From this short paragraph it becomes clear that we need a plan, a formula, that we should follow during the investigation. I would strongly recommend finding additional information about the incident response process from a process-modelling point of view, and checking what I have written previously.

When we talk about a live system and data acquisition, we need to know that everything we do on the system changes it somehow. It is a very hard task – but possible – to check what alteration is made by the investigator when executing commands or launching programs. In the literature we can find the term "forensic soundness", and the corresponding "anti-forensic" actions. Please also check what "evidence dynamics" means – a very interesting subject. Having said that, we can move smoothly to the OOV theory. The abbreviation simply resolves to Order Of Volatility, introduced by Farmer and Venema. It is the observation that information lives in an operating system in different layers or storages, as on the screenshot below. It shows how long each piece of information stays alive – what the life expectancy of the data is. This must be taken into consideration when acquiring any information from any OS.


And please… never – I mean never – forget about documentation. Build your documentation carefully, take notes, screenshots, files etc., and you will quickly notice how beneficial it is.


In the newest literature you can find that by automating the process of volatile data acquisition we can save time, avoid misunderstandings and failures, and the software will do the job for us! During the data acquisition process itself, several things must be pointed out. Think about what access to the subject machine you have (is it local or remote? What privileges do you have, or, going further, what are the infrastructure restrictions – which protocols can be used?). Then, how do you establish a trusted connection, and what tools can you use? Is it possible to transfer your software? Lastly, how do you preserve the volatile evidence, how do you get it, and what next?
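To give a rough idea of what such automation might look like on a Linux host – the output location and the exact command set below are my own assumptions, and a real toolkit would use trusted, statically linked binaries and collect much more – the core of it is simply running the volatile-data commands in a sensible order and saving everything:

#!/bin/sh
# minimal live-response sketch: save volatile data to removable media (path assumed)
OUT=/mnt/usb/live-$(hostname)-$(date +%Y%m%d%H%M%S)
mkdir -p "$OUT"

date           > "$OUT/start_time.txt"    # when the collection started
uname -a       > "$OUT/system.txt"        # kernel and host identification
uptime        >> "$OUT/system.txt"
ps auxww       > "$OUT/processes.txt"     # running processes
netstat -anp   > "$OUT/network.txt"       # connections and listening ports (root needed for -p)
netstat -rn    > "$OUT/routes.txt"        # routing table
arp -an        > "$OUT/arp.txt"           # ARP cache
lsof -n        > "$OUT/open_files.txt"    # open files, libraries and sockets
w              > "$OUT/logged_in.txt"     # logged-in users
date           > "$OUT/end_time.txt"      # when the collection finished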

Tuesday, 20 November 2012

ThreatSim

Guys, another awesome approach, this time presented by the ThreatSim company. There are tons of ways in which an attacker can compromise any company or its data. There is a variety of sophisticated attacks, such as low-and-slow, several types of sabotage, espionage or high-level "open" intrusions. Then, going through various kinds of automated programs (or self-propagating code), such as trojans, viruses, rootkits and parasitic viruses, we face the problem of massive or distributed attacks. We know about DoS, pharming, spam, spoofing and, finally, phishing. The guys from ThreatSim know how to handle the last one: phishing.



While standard phishing involves mass mailing, spear phishing works on a rather small scale, but is better targeted. Here the ThreatSim company has a lot to say. They believe (and they have documented it!) that educating the end user is always the best step in prevention, but not the last one.

ThreatSim is an immersive training and assessment solution that defends your organization from phishing attacks by teaching users how to identify and avoid phishing emails. Our SpearTraining method targets users with bad behavior by delivering training when they click on a simulated phishing email. ThreatSim enables you to run simulated phishing attacks against your employees on a regular basis in a safe and controlled manner. ThreatSim’s SpearTraining is delivered to the user when it will have the most impact: at the moment of simulated compromise.
They deliver a solution for personnel education while also addressing vulnerabilities in the browser and its plugins (Adobe, Java, Flash, Silverlight). What is great about them is that they know how easy it is for a non-IT person (or even an IT person, unintentionally) to simply click on a delivered link or any other bait. It's obvious, and very often we can catch ourselves doing just that. ThreatSim goes a step further and also focuses on the minutes after being compromised: how to behave after being owned, what to do, and how to reduce the impact? They can answer these questions.

Educating users is critical and an integral weapon in the war against spear phishing. If traditional information security is putting out fires, SpearTraining with ThreatSim means teaching your people not to play with matches.(...)The attacker is human. The user is human.


To sum up, it is believed nowadays that mass attacks targeting humans are the most effective way of compromising sensitive company data. It seems that patching, tons of other layers of protection and the best security support alone cannot protect your company. The keys are... staff awareness and education. Keep an eye on it!

Monday, 19 November 2012

SophosLabs

On the subject of software vendors, there are several products that look very good. One of them – Sophos – attracted me because of its great documentation and articles, and because Gartner recognised it as one of the best UTM solutions.


Founded in 1985, Sophos has become a well-known developer and vendor of security software and hardware. It is described as "complete security from products that work better together", and includes endpoint threat protection (client firewall, network access control, web filtering, application protection, and patch assessment). What is more, it combines data, email, web and mobile protection. At the same time, Sophos says it has the best support in the industry, introducing SophosLabs as its global group of security experts. Please check here for products.

About SophosLabs:
"Our analysts cover every area of IT security with an integrated system tracking malware, vulnerabilities, intrusions, spam, legitimate applications, legitimate and infected websites, personal data formats and removable device types"
For more info, go here.

Going back to Unified Threat Management – a solution that combines and performs different firewall tasks, such as network filtering, network intrusion prevention, AV, access control lists, load balancing and DLP – I find that Sophos is a leader in this industry.

"The Leaders quadrant contains vendors at the forefront of making and selling UTM products that are built for midsize business requirements. The requirements necessary for leadership include a wide range of models to cover midsize business use cases, support for multiple features, and a management and reporting capability that's designed for ease of use. Vendors in this quadrant lead the market in offering new safeguarding features, and in enabling customers to deploy them inexpensively without significantly affecting the end-user experience or increasing staffing burdens. These vendors also have a good track record of avoiding vulnerabilities in their security products. Common characteristics include reliability, consistent throughput, and a product that's intuitive to manage and administer"
Found in "Gartner recognizes us as a Leader in the 2012 Magic Quadrant for Unified Threat Management".


As I have mentioned before, Sophos produces very interesting materials on the net (check this). One of them is in a really old-school form, but anyway – it still presents very good, condensed knowledge. Thanks!!

Sunday, 18 November 2012

NBE analysis


What needs to be remembered is that incident response kicks in when the standard, or automatic, security controls fail. We have tons of network/security appliances in our companies, tons of false-positive alerts, and very often a sophisticated piece of malware or an attack occurs and shows where we have a security gap.

NBE is the information gathered (or available for reactive gathering) from network nodes such as routers, DNS servers, taps, hubs etc. It is all the packets that can be captured by any network device and saved in standard formats – everything that can be found in the network traffic. It allows us to confirm and determine whether an incident occurred or not. In other words, this is the type of data that is complementary to the information obtained from live response – and often it is the only data available.




There are four types of NBE information that can be taken from standard network traffic.

a) Full content data
Consists of the packet payloads; whether the entire content can be read depends on the use of encryption.

b) Session data
This is nothing else than the aggregation of packets. Odd traffic becomes easily noticeable. We get information about the protocol, the source/destination IPs and the ports. The best way to check what happened after an attack.

c) Alert data
An IDS is programmed to recognise patterns of bits, specific sorts of malicious activity, and other traffic that matches known signs of violations.

d) Statistical data
What we get is a simple summary of the traffic observed in a specific time frame. There are values describing the percentage of each protocol used, the number of packets transported and so on.
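All four data types are derived from raw captured traffic, so it is worth remembering how such an evidence file comes into being in the first place. A small example (the interface name, file names and host address are placeholders): a full-content capture with tcpdump, with the snapshot length set so that whole packets, not just headers, are stored.

# capture whole packets (-s 0) on the monitoring interface into a file for later analysis
tcpdump -i eth0 -s 0 -n -w evidence.raw
# the same, limited to one suspicious host (the address is purely illustrative)
tcpdump -i eth0 -s 0 -n -w evidence_host.raw host 10.0.0.5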





How do we analyze network based evidence in a post-mortem approach? Below I list and shortly describe the tools that will help us achieve this goal.
     
Statistical Data

Tcpdstat is a simple and powerful tool for creating summaries and statistics for traffic found in a tcpdump capture (or one produced by any other sniffing tool supporting libpcap).

The output gives us an idea of how big and how long the trace is, what protocols have been used, how many packets were transported and their sizes.

tcpdstat evidence.raw > output/statistical_output.txt


Alert Data

Alert data is taken from an Intrusion Detection System. It depends on how we obtain NBE – reactively or proactively – but here we will check how to use Snort in batch mode.

snort -c /etc/snort/snort.conf -r /home/user/NBE/evidence.raw -b -l /var/log/snort/

Using this command we are telling Snort simply to act as a standard IDS, but in this case the input is the previously acquired traffic dump. /etc/snort/snort.conf also needs to be set up correctly and point to the Snort rules directory. As output we get two files:

a) Alerts
b) Snort log file – the packets in which Snort found suspicious patterns

Documentation : http://www.snort.org/
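One more remark: the -b switch makes Snort write the flagged packets in binary (pcap) format, so that log can be replayed with any libpcap-based tool. A quick example – the timestamp suffix in the file name is only illustrative:

# the binary log produced by snort -b is a pcap file; read it back with tcpdump
tcpdump -n -r /var/log/snort/snort.log.1354098000 | less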

      Session Data
     
Using tools such as argus or tcptrace we can recreate the established sessions from a raw traffic dump. The output is simple, showing the time frame, protocol, source IP and port, destination, the state of the connection or reply, and the bytes transported.

Recreating with Argus :

argus -d -r /home/user/NBE/evidence.raw -w evidence.argus

Next, run the Argus client ra to view the session data in text format:

    ra -a -c -n -r evidence.argus > captured_session.txt

Please also check the possibilities offered by tcpdump:

tcpdump -n -r evidence.raw > evidence_full_session.txt 

 To build the session data tcptrace can be used:
tcptrace -n -r evidence.raw  > session_data.txt

        Documentation and downloads: http://www.tcptrace.org/ and http://www.qosient.com/argus/.




     Full Content Data

       In this part, we check what details are hidden in the raw packets – after following the clues found in the previous layers of NBE. First, we can easily use tshark to convert the raw traffic evidence to a more readable, human-friendly format:

      tshark -x -r evidence.raw > fullcontent.raw

       The output shows which conversation each packet belongs to (the session data is available), presenting the data in hexadecimal and ASCII format. Here we look for "known" patterns or supporting information for our previously found clues.
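       If a clue points at a specific host (or port), the same raw file can be narrowed down before reading it by hand. A small example using tcpdump – the address is made up purely for illustration:

      # hex+ASCII dump of only the traffic involving the suspicious host
      tcpdump -n -X -r evidence.raw 'host 10.0.0.5' > host_10.0.0.5.txt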

       We can also try to literally recreate full sessions from the raw file. With the session data, the only thing tcptrace did was list the session information; here we rebuild the sessions and view all the information transported between the hosts.

       To maximise the usefulness of this evidence we can filter for what we are looking for. Imagine that you had found some suspicious behaviour on port 5454 or 2323. To recreate the sessions use:

      tcpflow -r evidence.raw port 5454 or 2323

       As the output you will get four files, i.e. four reconstructed flows. Examining them gives you all the information that was transported between the hosts.
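       The file names tcpflow produces encode the endpoints of each flow, so it is immediately clear which file belongs to which direction of which connection. The addresses below are made up, just to show the pattern:

      # tcpflow names each reconstructed flow after its endpoints, e.g.:
      #   192.168.001.010.45321-010.000.000.005.02323
      # a quick look at what was transported:
      strings 192.168.001.010.45321-010.000.000.005.02323 | less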
   

       For other related tools please visit this page: http://www.tcpdump.org/related.html

Friday, 16 November 2012

An intelligence-driven, threat-focused …






The article starts by explaining what APT means. This type of threat is becoming more and more serious. While we have developed groups of methods and barriers against self-propagating viruses and other "automatic" threats, we face a problem when it comes to incident response for the Advanced Persistent Threat.

"Advances in infrastructure management tools have enabled best practices of enterprise-wide patching and hardening, reducing the most easily accessible vulnerabilities in networked services. Yet APT actors continually demonstrate the capability to compromise systems by using advanced tools, customized malware, and “zero-day" exploits that anti-virus and patching cannot detect or mitigate. Responses to APT intrusions require an evolution in analysis, process, and technology; it is possible to anticipate and mitigate future intrusions based on knowledge of the threat."

The Intrusion Kill Chain, or intrusion scenario, is simply the process followed by the intruder. The article presents a lot of concepts and statements about how to think about the incident response process, based on case studies of attacks. Lockheed Martin shows that we need to know what the attack looks like in order to know how to defend against APT. This is obvious, but when we look at this approach, we find that there is great knowledge behind the countless abbreviations and processes. A really awesome article.
Based on the fantastic work done by Richard Bejtlich, I present one intrusion scenario (simplified version):



To sum up and describe the graph, we go through these stages:

     1.    Reconnaissance: all scanning activity – vulnerability scanning, checking connectivity, which system we are talking to, versions, patches, infrastructure.
     2.    Exploitation
     3.    Reinforcement: methods of escalating our privileges on the owned machine, using methods and tools to create a backdoor and our "place" on the victim machine.
     4.    Verification: checking whether we have easy access to the attacked machine, whether we are raising any alarms etc.
     5.    Cyber crime: in this stage the attacker carries out his plan (theft, vandalism etc.)

Another scenario is presented by the United States Department of Defense and is called the "kill chain".

Quickly summarizing:

      1.    Reconnaissance
      2.    Weaponization
      3.    Delivery
      4.    Exploitation
      5.    Installation
      6.    C2
      7.    Actions

The second model is more detailed, and the "exploit" stage is divided into more specialised levels. Nevertheless, both approaches and models show what an attack looks like and what the purpose of each stage is. It is worth mentioning that during an intrusion the attacker may use several different IPs – for example, a different source for every stage. At the same time, different protocols, tools or even systems might be used. What I think is that there is a variety of methods and ways of prevention. Discussing the attack from the hacker's point of view, we find several stages where we can apply different types of protection. On the other hand, for the investigator there is a multitude of places and factors that must be checked during the intrusion detection process. Sure, the human factor is always crucial in attacking, preventing or even investigating the crime scene. This still cannot simply be called cyber crime, but rather a fight between the "bad and good" guys and their skills.

Thursday, 15 November 2012

Open Source Solution and Documentation?

Guys, just a quick reference to the AlienVault solution and its resources. Looking for documentation related to building an incident response process, I found out that there are awesome articles delivered with this product!





OSSIM, the Open Source SIEM developed by AlienVault, is a good-looking product with a bunch of tools and great features. Its developers say:


"OSSIM provides all of the capabilities that a security professional needs from a SIEM offering, event collection, normalization, correlation and incident response - but it also does far more."

Possibly I will review this solution on my own – hopefully writing some kind of review – but today I am only passing on links to the documentation. "SIEM for ITIL-Mature Incident Response", divided into two great parts, gives a really good background for incident response process development. It can be found here and here.

I really encourage all IT security pros to read and understand it!