Sunday, 11 October 2015

Modern Honey Network ArcSight content

Continuing from the previous post about installing our own Modern Honey Network... all we need now is content to make some use of the intelligence we get from it. The simplest scenario we can cover is using the IP addresses caught on our honeypot to generate alerts when they are identified by other sensors on our network.

First things first, we need an Active List to store our data. This will be a Fields based Active List with the following fields:
Malicious IP, with Type set to Address and sub-type IP Address
Comments, with Type set to String

Our active list should be set with a TTL of a week or so (again, up to you) to keep it efficient and not flag an IP as dirty forever. The Comments field, as you will see below, will be populated with the honeypot sensor that triggered the entry (Dionaea, Conpot, Snort or whatever).

The Alerting rule needs to follow the following conditions:
( Type != Correlation AND Device Product !=  MHN AND InActiveList("IP-Address-WatchList"))

Actions are up to you, but for simplicity's sake:
On First Event:
Set Event Field Actions 
name = Detected communication to IP address on WatchList
priority = 7

The Processing rule on the other hand would look something like this:
( Type != Correlation AND Device Product =  MHN AND Not InActiveList("IP-Address-WatchList"))

Actions would have to be:
On First Event:
Add To Active List
Field: Attacker Address
Field: Device Custom String1
Resource: IP-Address-WatchList

And that's it, enjoy your fresh intel :) 

Modern Honey Network and ArcSight combo

The Modern Honey Network is a brilliant project that allows you to easily deploy honeypot sensors on your network. Since in my case we are working in a small lab behind NAT, we will end up with a large port-forwarding table and some restrictions on port options. The system I have built this on is an Ubuntu 14.04.3 server (64-bit).

System install:
I selected only the mail server and SSH during installation. One small detail: since the honeypot will be taking port 22, we need to change our config in /etc/ssh/sshd_config to Port 2222 or whatever you want (don't forget to service ssh restart afterwards).
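If you prefer to script that change, a sed one-liner along these lines should do it. This is a minimal sketch run against a scratch copy of the file (the scratch path is made up for illustration); on the real box, point it at /etc/ssh/sshd_config and follow up with service ssh restart.

```shell
# Sketch: flip the sshd Port directive from 22 to 2222.
# Demonstrated on a scratch copy; the real target is /etc/ssh/sshd_config.
printf 'Port 22\nPermitRootLogin without-password\n' > /tmp/sshd_config.demo
sed -i 's/^Port 22$/Port 2222/' /tmp/sshd_config.demo
grep '^Port ' /tmp/sshd_config.demo
# prints: Port 2222
```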

Deploying is quite straightforward:
apt-get install git -y
cd /opt/ 
git clone 
cd mhn/scripts/ 

Everything should be fine at this point and Nginx should be up and running
/etc/init.d/nginx status

Check the rest of the services with:
/etc/init.d/supervisor status

And more detail:
supervisorctl status

Install the ArcSight connector script, which outputs the data to /var/log/mhn/mhn-arcsight.log:
cd /opt/mhn/scripts/ 

Install some libraries needed to install the SmartConnector on the box:
apt-get install lib32z1 lib32ncurses5 lib32bz2-1.0 lib32stdc++6 

Configure the Smart Connector to read a CEF File and push to the Manager:
cd /opt/ArcSightSmartConnectors/current/bin/./

Log in with the credentials you created at http://<ip>:80 and deploy the following sensors (the ones that I managed to make work on the same box):

You might have noticed that there are some errors when you run supervisorctl now. Off to fix celery-worker:
supervisorctl status mhn-celery-worker 
mhn-celery-worker                FATAL      Exited too quickly (process log may have details) 
wordpot                          STOPPED    Oct 11 11:50 AM 

Let's see the errors:
cat /var/log/mhn/mhn-celery-worker.err 
IOError: [Errno 13] Permission denied: '/var/log/mhn/mhn.log'

This might not be best practice, but it worked, and since this is a test VM I will accept the risk.
ls -all /var/log/mhn/mhn.log 
chmod 666 /var/log/mhn/mhn.log 
supervisorctl start mhn-celery-worker  
supervisorctl status mhn-celery-worker 
mhn-celery-worker                RUNNING    pid 25876, uptime 0:02:05

Change ports on wordpot:
sed -i '/PORT/s/80/81/g' /opt/wordpot/wordpot.conf 
supervisorctl start wordpot 
supervisorctl status wordpot 
wordpot                          RUNNING    pid 26029, uptime 0:00:07

Change ports on conpot:
sed -i '/port/s/80/82/g' /opt/conpot/env/src/conpot/conpot/templates/default/http/http.xml 
supervisorctl restart conpot 

Now things should be looking better:
root@mhn:~# supervisorctl status 
conpot                           RUNNING    pid 4208, uptime 0:00:06 
dionaea                          RUNNING    pid 1174, uptime 0:38:54 
geoloc                           RUNNING    pid 1180, uptime 0:38:53 
honeymap                         RUNNING    pid 1199, uptime 0:38:53 
hpfeeds-broker                   RUNNING    pid 1177, uptime 0:38:54 
hpfeeds-logger-arcsight          RUNNING    pid 1173, uptime 0:38:54 
kippo                            RUNNING    pid 1185, uptime 0:38:53 
mhn-celery-beat                  RUNNING    pid 1172, uptime 0:38:54 
mhn-celery-worker                RUNNING    pid 1190, uptime 0:38:53 
mhn-collector                    RUNNING    pid 1192, uptime 0:38:53 
mhn-uwsgi                        RUNNING    pid 1187, uptime 0:38:53 
mnemosyne                        RUNNING    pid 1179, uptime 0:38:53 
snort                            RUNNING    pid 1186, uptime 0:38:53 
wordpot                          RUNNING    pid 2076, uptime 0:34:00 

Very simple and easy to hook up to ArcSight. All we need now is some content for the feed... probably feeding the output into an active list for later consumption by other content, or something like that. Enough for now.


Tuesday, 8 September 2015

Dual boot Kali + Encrypted Windows

You have a system with Kali 2.0 (encrypted) and a Windows setup that needs to be encrypted as well. Encrypting Kali is simple enough to google, and so is applying VeraCrypt system-wide encryption, so... moving on. The problem comes into play because both VeraCrypt and Linux want to make use of the MBR for their boot sequence, which can't happen, since VeraCrypt is not able to multi-boot Linux.

The solution is to give the MBR to VeraCrypt, using its added capability to boot a separate OS from a secondary partition (in this case /dev/sda3, which is my /boot).

Encrypt the Windows partition with VeraCrypt (overwriting your Linux MBR), then boot from a Kali live CD/USB and:
cryptsetup luksOpen /dev/sda4 root
(the LVM volumes are inactive at this point)
modprobe dm-mod
vgchange -ay
lvscan
Now we can proceed with fixing our boot sequence.

mount /dev/mapper/hermes-root /mnt/
mount /dev/mapper/hermes-home /mnt/home
mount /dev/sda3 /mnt/boot
for i in /sys /proc /run /dev ; do mount --bind "$i" "/mnt$i"; done
chroot /mnt
vi /etc/default/grub
add line to show:
save and exit
grub-install /dev/sda3
for i in /mnt/home /mnt/boot /mnt/sys /mnt/proc /mnt/run /mnt/dev /mnt ; do umount  $i ; done

Now you should only have Kali on your Grub2 menu which will only be accessible if you choose NOT to boot with Windows.

Thanks for reading :)

Sunday, 30 August 2015

Kali 1.1.0 upgrade to 2.0

Finally got some time, post vacation, touring with work etc. etc., to upgrade my laptop to the latest major version. So, first things first...
Pre-install, got myself to the latest update available:

apt-get update
apt-get dist-upgrade

(If you are so far behind that you get a Linux kernel upgrade, I would suggest bouncing your box before continuing, for good measure.)

The default /etc/apt/sources.list file points to the right repositories for the upgrade, but the product name has changed, so we need to make some changes.

vi /etc/apt/sources.list 
deb kali main non-free contrib 
deb-src kali main non-free contrib 
deb kali/updates main contrib non-free

You need to replace the distribution name above, using sana instead of kali, which will make it look like:

deb sana main non-free contrib
deb-src sana main non-free contrib
deb sana/updates main contrib non-free
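If you would rather script the edit, a sed one-liner along these lines should do it. This is a sketch against a scratch copy of the file, with REPO_URL standing in for the repository address; on the real box the target is /etc/apt/sources.list.

```shell
# Scripted version of the kali -> sana rename, run against a scratch copy.
# REPO_URL is a placeholder for the repository address.
printf 'deb REPO_URL kali main non-free contrib\ndeb REPO_URL kali/updates main contrib non-free\n' > /tmp/sources.list.demo
sed -i 's/ kali/ sana/g' /tmp/sources.list.demo
cat /tmp/sources.list.demo
# prints:
# deb REPO_URL sana main non-free contrib
# deb REPO_URL sana/updates main contrib non-free
```

Note that the pattern also catches the kali/updates line, which the upgrade instructions want renamed as well.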

Now lets install/upgrade:

apt-get update
apt-get dist-upgrade

That will take a while. Don't walk away, though, because it will ask questions when it starts installing. My recommendation would be to replace the old configs with the new ones from the package maintainer when asked.

Post install:
After we are done, if you ran the upgrade like I did, from a terminal inside the GUI, I would recommend just typing reboot or init 6 rather than trying to reboot the box via the GUI. That did not work for me (which makes sense, since most of GNOME should have been replaced at that point by the new version).

Also missing (for some reason) is gnome-control-center; just install it and things can be configured through it again.

apt-get install gnome-control-center
apt-get autoremove

Good news, after the box is bounced things work as they should in the new gui and most importantly... I don't have to re-do my custom key-bindings because it kept them! Awesome :)

One of the unfortunate side-effects is that the mapping of devices has changed... my custom /etc/fstab had an external disk mapped at


which became


Moreover, multi-monitor setups don't work like they used to: with multiple monitors, the secondary now behaves as a single monitor, unless you install this extension here.

Then there is the new 4.0.4 kernel... VMware Workstation does not appreciate that. There is a cure, though (amazingly, the same one as for the 3.19 kernel):

apt-get install linux-headers-$(uname -r)
curl -o /tmp/vmnet-3.19.patch
cd /usr/lib/vmware/modules/source
tar -xf vmnet.tar 
patch -p0 -i /tmp/vmnet-3.19.patch 
tar -cf vmnet.tar vmnet-only
rm -r *-only /tmp/vmnet-3.19.patch
vmware-modconfig --console --install-all

Last clean-up would be to remove the 3.18 kernel that is still hanging around:
apt-get remove linux-image-3.18 
grub-install /dev/sda

All said and done, the new look is good, yet it seems a bit heavy on my poor laptop's GFX card.

Thanks for reading :)

Thursday, 28 May 2015

Enumerator script update 3

Here we go again, more changes! Earlier today @digininja sent me a tweet about swapping sslscan for TLSSLed... what a brilliant idea. Much like enum4linux wraps nbtscan, TLSSLed is a bash script that wraps openssl and sslscan.

TLSSLed brings added tests to the table, like a test for TLS v1.1 and v1.2 support (CVE-2011-3389, aka BEAST), a check for legacy renegotiation even when secure renegotiation is supported, a test for whether SSL/TLS renegotiation is enabled, and much more. Full details are available in the script itself (it should be under /usr/bin/ on your Kali box, or you can get the full detail and the script itself from

Credit for the upgrade to @digininja and @raulsiles for making the script to begin with.

As usual new version available on github.

Many thanks


Tuesday, 26 May 2015

Enumerator script update 2

Yet another update to the previous post; I keep adding and fixing things on that enumerator script... let's see where it's going to end! To address the lack of SSL enumeration in the previous versions, I have added to the script:

  • nikto (for both http and https hosts)
  • sslscan 
Those two should be enough, at least for an initial feel of the environment and the targets available. After that you can go crazy and hit your targets with dirbuster or webslayer or dirs3arch... or whatever floats your boat, really. I have left nikto pretty much with its most basic options, which kind of crosses over into vulnerability assessment territory. It's much noisier from an IDS perspective, so be warned; at some point I will tune it down to exactly what I would like it to do.

Finally, I remembered to add --reason to my nmap scans, along with -n (DNS enumeration is a different subject) and -vvv, just so we get all the information we need from the initial scans. No need for do-overs!
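Put together, the invocation now looks roughly like this. A sketch only: live_hosts.txt and tcp_scan are illustrative placeholder names, not taken from the script itself.

```shell
# The flags discussed above, assembled on one illustrative nmap command line.
# live_hosts.txt and tcp_scan are placeholder names for this example.
NMAP_FLAGS="--reason -n -vvv"
echo "nmap -sV $NMAP_FLAGS -iL live_hosts.txt -oA tcp_scan"
# prints: nmap -sV --reason -n -vvv -iL live_hosts.txt -oA tcp_scan
```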

Hope you find it useful, as usual latest version available in the same place... here

Thanks for reading


Sunday, 17 May 2015

Enumerator script update

In my previous post I explained how the Enumerator script works, what protocols it supports, etc. Well, I wanted to add HTTPS screenshots as well, so I had a second look at it, which led me to CutyCapt's latest version. The --insecure option we need to make this work is only available there... so here we go:
apt-get install subversion libqt4-webkit libqt4-dev g++ -y
cd /opt
svn co svn:// .
cd CutyCapt
Now CutyCapt's latest version is in the right path and we are good to go. Minor other changes were made to the script to fix whatever else was wrong. Unfortunately the nmap http scripts don't seem to work for HTTPS; I might have to do something about that later. For the time being I have just added screenshots of your HTTPS pages (not that there is going to be a huge difference between an Apache header on 80 and its equivalent on 443... but that's rarely the hand we are dealt).

Latest version available in the same place... here

Thanks for reading


Saturday, 16 May 2015

Enumerator script

I have been contemplating putting something together to automate what can be automated in the initial phases of a penetration testing exercise... and here it is. This is personal preference mostly, combining the few things I have picked up from here and there. Recommendations for improvements are very welcome. For the most part the script utilises nmap scripts, with the odd addition of external tools (most of them default on Kali) depending on the occasion.

What does it do?

  1. ARP and ping scan your range to id your live targets
  2. Scan the output of the above using nmap's top 2000 TCP ports... and, since we want to finish this millennium, the top 10 UDP ports (change it if you must, at your own peril)
  3. In both cases the scan grabs versions and puts the output in the directory you told it to create at the beginning, neatly building a directory structure per target.
  4. Enumeration covers the following services (all outputs go in the relevant target dirs)
    1. SMTP - hitting 25,465 and 587 with smtp-enum-users.nse smtp-commands.nse and smtp-open-relay.nse
    2. SNMP - using metasploit it kicks off auxiliary/scanner/snmp/snmp_login without involving the database though, if it finds a match it will use snmpcheck to get further details on the host
    3. FTP - check for enabled anonymous access using ftp-anon.nse 
    4. Finger - finger nmap plugin to enumerate users if possible
    5. NFS - nmap scripts nfs-showmount and nfs-ls to identify anything on matching targets
    6. SMB - use nmap's smb-check-vulns.nse to pick up any easy targets and then kick off enum4linux 
    7. TFTP - nmap's tftp-enum.nse
    8. HTTP - for this part you need to have cutycapt installed since it will look for http ports and take a screenshot of the pages residing there, additionally it will run nmap's http-headers, http-methods, http-title, http-auth-finder and http-enum scripts.
I did add a primitive counter to the scanner, since that is the longest loop the script runs... so you know how far from the promised land you are!
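As a taste of step 1, the live-host extraction can be sketched like this. The sample .gnmap lines below stand in for real nmap grepable output (a `nmap -sn <range> -oG -` sweep); the file names are made up for illustration.

```shell
# Step 1 sketch: pull live IPs out of nmap's grepable ping-sweep output.
# Sample lines are used here in place of a real scan; paths are illustrative.
cat <<'EOF' > /tmp/ping_sweep.gnmap
Host: 10.0.0.1 ()  Status: Up
Host: 10.0.0.5 ()  Status: Up
Host: 10.0.0.9 ()  Status: Down
EOF
awk '/Status: Up$/ {print $2}' /tmp/ping_sweep.gnmap > /tmp/live_hosts.txt
cat /tmp/live_hosts.txt
# prints:
# 10.0.0.1
# 10.0.0.5
```

The resulting live-host list is then what the port scans in step 2 iterate over.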

Script available on my github here.

Thanks for reading,


Sunday, 26 April 2015

ArcSight Security-Onion and Snort combo

I was playing around with ArcSight Express, building a small test lab, and thought I would integrate my Security Onion box with it so I can feed in the Snort logs. The Security Onion box has 2 NICs: eth0 for management, while eth1 is SPANned to monitor the network traffic. The ArcSight Express is not the latest (4.0), but at least I used the latest connector (7.x.x.something). The simplest way to do this is with a Syslog connector; I installed mine on the ArcSight box itself, since it's not going to have a massive load going through it.

First things first: Barnyard2 needs to send the right logs to syslog-ng, so we need to edit the config file used for the monitoring interface and comment out the original "output alert_syslog" entry.

vi /etc/nsm/seconion-eth1/barnyard2-1.conf
and add:
output alert_syslog: sensor_name seconion-eth1-1, local, LOG_LOCAL6 LOG_ALERT, operation_mode default
After we save and exit, we need to apply the configuration by issuing:
nsm --sensor --restart --only-barnyard2
The next part is configuring syslog to send logs to our ArcSight box, so we need to edit the syslog-ng config and add a filter and a destination.
vi /etc/syslog-ng/syslog-ng.conf
and add the following in the appropriate places (the tcp() destination should contain your ArcSight box's IP):
filter f_local6 { facility(local6); };
destination d_net { tcp("" port(514) log_fifo_size(1000)); };
log { source(s_syslog); filter(f_local6); destination(d_net); };
and apply the changes by restarting the service
service syslog-ng restart

On the ArcSight side, run the installer, select "Syslog Daemon" as your connector type, leave the port at 514, and change the protocol to TCP. The rest should be standard, like any other connector installation.

The result should be:

More later :)

Friday, 6 February 2015

Managed SIEM/MSSP dos and don'ts

These are some pointers that will benefit you if you are thinking of going to an MSSP (Managed Security Service Provider) for a SIEM (Security Information and Event Management) service. The sooner you consider these points, the better for your organisation's security posture.

  • Define a security policy. If you don't have one or think that you don't need one, and still insist on having a SIEM MSSP, you probably want it for the nice flashing lights and graphs in which case you need to invest heavily on a year-long X-mas tree. It will cost less and thousands of professionals around the world will thank you for it. When you are done with the security policy share it with your MSSP. The rules they will use to protect your network will depend on it.
  • Don't ask "what other similar companies are doing?" if you do, it means that you haven't thought of point one seriously enough. For the romantics among us, companies are like snowflakes so... back to point 1. Each environment differs and should be given the right amount of attention to detail to optimize as best as possible. Security out-of-the-box DOES NOT WORK. 
  • Define your scope correctly even if it "kills" you. Positioning of sensors can be crucial to your project (architecture-wise). Asset management is key here; you can't secure things you don't know you have!
  • After you have defined the scope... don't change it (or at least accept that changing it can have a dramatic impact on your project delivery date). People that say they can hit a moving target are most likely lying to you.
  • You need to make sure you are logging the right stuff. If you don't have auditd enabled on your Linux boxes, it won't matter that you are sending the syslog over to your SIEM... the rule won't fire because the TRIGGER is missing. Testing your rules is not bad; testing all of them is, though. Pick some at random, yet covering all the technologies you want to on-board.
  • Select rules that are related to your technologies. Everything causes resource creep these days so if you don't have Oracle or MSSQL, don't select the Database specific rules.
  • Feed the right technologies to your MSSP. If you can afford to feed everything to your MSSP, do so, but if you have to pick between two sources select the one that will contain the most valuable data. Oracle database feed or Proxy feed? Proxy feed of course! NIDS and Proxy feeds are two of the most valuable resources an Incident Handler can have (Payloads included!).
  • Is there a third party involved in your network? You need to add that delay not just on your on-boarding phase, but on your tuning and most importantly on every incident's triage phase as well. Those relationships need to be carefully defined and managed. 
  • Be prepared, when on-boarding a particular technology take note on who controls it, who manages it and what steps are needed to implement changes to it, when tuning comes into play you want to be ready to react. You can minimize the tuning process by being ready to make decisions and acting on them asap.
  • Configure access appropriately. If you expect people to manage your firewall or extract data from your NIDS, don't just make them an account on the box, make sure they have VPN access to the box as well! 
  • Don't try to hide from your SIEM; network diagrams and device configuration (inline/span-tap) are a goldmine of information for analysts and handlers. Just make sure you don't give them War and Peace to read... be informative and to the point. Remember, you are trying to integrate with these people!
  • Different MSSPs do different things for different companies. In your case, what are they hired to do? Monitoring, detection, triage, incident response? The big ones can do it all, but what's in your contract? If there is a gap, find it and eliminate it.
  • Make sure the incident contact procedure is clearly defined and tested. You need to cover different levels of alerts, since you don't want an email or a phone call every time Bob mistypes his password (sorry to call you out like that, Bob). Your process also needs to contain alternatives: yes, for email you will wisely use a mailing alias you control so all team members are notified... but what about phone calls? Availability of the people in that process needs to be taken into account, so the "red" phone needs to rotate, but secondary and even tertiary numbers should also be made available.
  • How is your SLA standing? Is it feasible, or is it achieved via the usual ticketing-system tricks? Above all, have realistic expectations: if you want action within 15 minutes, you need to be quick enough to authorize it (would you be willing to let your MSSP keep a finger on the trigger?) and also be willing to pay top dollar for it. And since you are paying top dollar, have a review process in place to make sure everything is as it should be.
  • Do not attempt to shorten the tuning period. Analysts need time to collect data and assess what is noise and what is useful, what should be tuned out and what should be left as is. Keep in mind that tuning should be an ongoing process and not something that is done over a 6-8 week period and then left as is. IDS Signatures, Firewall rules, and environments in general change over periods of time so you should keep on top of it on a permanent basis.
  • Do not ask the Analysts to start tuning your data before the on-boarding is completed. Tuning is a time consuming, borderline tedious process that is vital to your service and it can change from one day to another if you keep changing your network. Asking an analyst to start tuning while you are on-boarding is asking them to do the same work multiple times. 
  • Aim to on-board your technologies according to category. If you finish on-boarding all your windows or IDS sensors, the Analyst can focus on tuning individual technologies and probably save you some time.
  • You should know what reports you will be getting, what they contain and how they will be populated. Confirm the reports work and that they provide you with enough information to be actionable (Bob failed to login, but from which IP/MAC address?). If you are able to schedule your own reports, make sure you do so at the rate you are able to consume them. No reason to be running daily reports if you can't afford to look at them every day!
  • Lastly do not fall into the EPS trap (Events Per Second), just because most MSSPs will be charging you accordingly (as they should since more EPS = more resources feeding in) it does not mean that you should be minimizing the event sources you are feeding in to your MSSP. What you ultimately care about is what comes out from the SIEM. Actionable alerts and meaningful reports. You should strive to configure and maintain devices that feed in the SIEM only what is valuable to the analysts.
That's all for now :)