Sguil FAQ

From NSMWiki



Who is responsible for this FAQ?

In the immediate sense, that would be me, David Bianco, the FAQ editor. I use sguil at work, and thought that maintaining this FAQ would be a good way to give something back to the sguil community.

In the larger sense, though, we are all responsible for the FAQ, especially now that it's hosted on NSMWiki. It won't be very good unless people submit their own questions, and perhaps answers.

Sguil Basics

What is Sguil?

Sguil (pronounced sgweel) is probably best described as an aggregation system for network security monitoring tools. It ties your IDS alerts into a database of TCP/IP sessions, full content packet logs and other information. When you've identified an alert that needs more investigation, the sguil client provides you with seamless access to the data you need to decide how to handle the situation. In other words, sguil simply ties together the outputs of various security monitoring tools into a single interface, providing you with the most information in the shortest amount of time. Sguil uses a database backend for most of its data, which allows you to perform SQL queries against several different types of security events.

For an overview of sguil philosophy, architecture and usage, have a look at my presentation on the subject, Open Source Network Security Monitoring with Sguil.

Can you describe the sguil architecture?

A sguil system is composed of a single sguil server and an arbitrary number of sguil network sensors. The sensors perform all the security monitoring tasks and feed information back to the server on a regular basis. The server coordinates this information, stores it in a database and communicates with sguil clients running on administrators' desktop machines. It can also issue requests for specific information from the sensors. In ASCII terms, this communication pattern looks like:

                                           |<---------> Sensor #1 
           Client  ==>  Server <=========> |<---------> Sensor #2
                                           |<---------> Sensor #3

Each sensor monitors a single network link (although you can run multiple sensors on one physical machine) and collects several different types of information:

  1. Snort monitors the link for security events, and logs them to a file on the local disk.
  2. Barnyard takes events from the snort log file and sends them to the sensor agent, which inserts them into the database running on the sguil server in near real-time.
  3. A separate instance of snort logs the full content of all network packets to the local disk (this typically requires a large, separate data partition).
  4. SANCP records TCP/IP sessions and forwards them to the database on the sguil server.
  5. The sguil agent also listens for commands from the sguil server. These commands are typically requests for packet data previously logged by Snort.

How is sguil different from Snort + ACID or Snort + BASE?

ACID & BASE are both web-based IDS alert management systems. They let you browse and search alerts, but don't offer very much in the way of data-mining that would allow you to answer questions like, "Was this an attack attempt or a false positive?", "Was the attempt successful?" or "What other machines did the attacker try to crack once he got into this one?". They rely on you to do the research necessary to determine the severity of the situation.

Sguil's design centers on providing convenient, quick access to a host of supporting information, which both saves you time and helps you make better decisions. Incidentally, because sguil uses a dedicated client instead of running through a web browser, you get a richer, more responsive user interface as well.

For a more complete answer to this question, read Richard Bejtlich's post to the snort-users email list.

Who wrote sguil?

Real working security analysts. We know your pain.

Bamm Visscher (AKA "qru" on IRC) originally created sguil, but now he and Steve Halligan (AKA "srh" on IRC) do most of the coding.

But the real beauty of Open Source is that the entire sguil community contributes in one fashion or another, even if just by reporting bugs.

Where can I find more information/documentation about sguil?

Start with the Sguil web page. There you'll find screenshots, a Flash demo, installation documentation and more. You should also check out NSMWiki, the official wiki of the Sguil project. It's got tons of user-contributed information about sguil, including this FAQ.

Once you've digested that, we highly recommend Richard Bejtlich's The Tao of Network Security Monitoring. This book is the best general reference for the Network Security Monitoring (NSM) methodology, and the sguil chapter is available for free from Richard's website.

Are there any sguil mailing lists?

Yes, there are three. The sguil-users list (mirror) is for general sguil questions, problems or comments. There's also a sguil-dev list (mirror) for developer discussions, and a sguil-cvs list (mirror) that tracks changes that have been checked into the source tree. The easiest way to subscribe to any of these lists is to use the subscription page.

What about IRC channels?

The sguil community uses IRC as a primary day-to-day communication method, and anyone is welcome to join in the discussions. If you have an IRC client, connect to the server and join the #snort-gui channel.

How can I try out sguil without installing it all?

Download and install just the sguil client from this page. Once you've got it running, point the client at the demonstration server on port 7734. You'll need to enter a username and password, but you can make up anything. The client will ask which sensor you want to monitor (select the one called "reset"), then bring up the main console window for you to play around with. Once you're connected, check out the "User Msgs" tab in the lower left corner. On the demonstration server, this tab is gatewayed to the IRC channel, so you can ask questions, get help or take a developer-led tour of the system.

I heard sguil is about to be rewritten in perl/C/Java/ADA...

Yup, any day now. Honestly.

Seriously. Why Tcl?

qru says:

Um, cause I have a tcl book. :)

What is Network Security Monitoring (NSM)?

NSM is a monitoring methodology that collects several different types of complementary data and relies on them to make intelligent decisions about security events. The emphasis is on collecting as much data as is practical, then streamlining an analyst's access to the data to help them get what they need as quickly and easily as possible.

For a more detailed explanation, see Chapter 2 of The Tao of Network Security Monitoring.

Why is it called "sguil"?

I won't even begin to go into that, but here's the complete answer.

Installation and Configuration

Where do I get sguil?

Most people should use the stable released version, available on the Sourceforge download page.

Developers or fearless souls can try the CVS snapshot, which is often updated several times a day. You can fetch the latest code with the following command:

% cvs checkout sguil
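The checkout above assumes CVSROOT is already set in your environment. For anonymous access to a SourceForge-hosted project, the full invocation typically looks like this (the server name below is a guess at the usual SourceForge pattern; check the project's CVS page for the real one):

```
% cvs -d:pserver:anonymous@sguil.cvs.sourceforge.net:/cvsroot/sguil login
% cvs -z3 -d:pserver:anonymous@sguil.cvs.sourceforge.net:/cvsroot/sguil checkout sguil
```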

Also look here for pointers to RPMs, ISOs and other ways to get sguil.

Should I use the stable version or the CVS development release?

CVS usually works, but you might find some lingering bugs from time to time. If you have a healthy tolerance for this sort of thing, try the CVS version. You'll always get the latest code, which often includes significant new features.

On the other hand, if you need stability more than bleeding edge functionality, you should definitely be using the stable releases.

What other software does sguil depend upon?

The following table summarizes the different tools that make up sguil and shows the different types of information they provide:

Tool - Purpose
MySQL 4.x or 5.x - Data storage and retrieval
Snort 2.x - Intrusion detection alerts, scan detection, packet logging
Barnyard - Decodes snort alerts and sends them to sguil
SANCP - TCP/IP session records
Tcpflow - Extracts an ASCII dump of a given TCP session
p0f - Operating system fingerprinting
tcpdump - Extracts individual sessions from packet logs
Wireshark - An excellent packet analysis tool (formerly called Ethereal)

This is really complicated. Is there an installation and configuration guide?

Yes, there are several. The official sguil installation document is here. Unfortunately, it skips a few steps, like the installation of SANCP and p0f. Richard Bejtlich maintains installation scripts for FreeBSD systems.

For Linux users, check out the Sguil on RedHat HOWTO. This includes a complete HOWTO for installing sguil on a Red Hat Enterprise Linux (or compatible) system, as well as an automated installation/configuration process. Even if you're using another OS, you'll probably still find most of this to be useful.

How can I tell if it's all working right?

There's no standardized test suite, but you can get a pretty good indication by using various client functions. First, make sure you can log in, monitor some sensors and actually see some alerts. That'll show you that the basic sguil database is working and the sguil server is responding to your requests.

Next, highlight an alert and turn on the "Show Packet Data" and "Show Rule" buttons. You should see the packet headers and payload displayed in the lower right hand pane, and the Snort rule that generated the alert in the line just above. If you get these, all the basic alerting data is working.

Next, you should check to see if you can generate ASCII transcripts from the full packet logs. Select one of the alerts where the "Pr" (protocol) column lists "6". This is a TCP session, and the only type of session from which you can create ASCII transcripts. Right-click in the "sid.cid" column and select "Transcript" from the context menu. A new window should appear, and eventually it should contain a copy of the payload data from that session. It's worthwhile at this point to also select "Ethereal" from the same context menu, just to make sure that you can get not only the transcript but also the raw packets when you need them.

Finally, make sure database queries are working. Using the same alert as before, right-click in the "Src IP" column and select Quick Query -> Query Event Table -> Query SrcIP from the context menu. This generates an SQL query looking for all Snort alerts that come from that same IP address. The client will bring up another window that allows you to modify the SQL query before running it, but in this case you want to run the query as is, so just click "Submit". You may have to wait a few minutes, but eventually sguil should return at least one event (the event you originally chose), and perhaps others as well.
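For reference, the Quick Query is just ordinary SQL against the event table. Stripped down, it is roughly this shape (the column names follow the sguil schema as best I recall, and the IP is an example; sguil stores addresses as integers, hence INET_ATON):

```
SELECT event.status, event.priority, event.signature, event.timestamp
  FROM event
  WHERE event.src_ip = INET_ATON('192.168.1.10');
```

Since the client lets you edit the query before submitting, this is also a handy place to add a timestamp range to keep the query fast.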

Following the instructions from the last paragraph, repeat this procedure for the Quick Query -> Query SANCP Table -> Query SrcIP context menu item as well. You should receive similar results.

If all these tests worked, your sguil installation is working normally. If any of the tests failed, refer to the Troubleshooting section of this FAQ for pointers, or ask for help on the mailing list or IRC channel.

How does sguil capture network session information?

Sguil uses two distinct methods to capture information about network sessions. The older method is to patch the source code to snort, and enable the stream4 preprocessor directive in your snort.conf file. This allows snort to log information about TCP sessions it sees into the sguil database. The advantage to this method is that it leverages the copy of snort you're already running. This works well, but is limited to logging only TCP sessions. UDP won't be shown at all.

The newer method of capturing session data is to use SANCP. SANCP runs in a separate process and logs TCP, UDP and ICMP, which gives you a much more complete picture of the traffic on your network.

You can run both at the same time if you like, but this can hurt database performance significantly, since the server has to do twice as much work to insert both session sources.

Should I use snort or SANCP to capture session information?

You are much better off running SANCP instead of the snort preprocessor method. For about the same amount of system overhead, SANCP gives you a far better picture of who's talking to who.

If I'm not capturing session information with Snort's preprocessor, do I still need to patch the Snort source?

With sguil 0.5.3, yes. With 0.6.0 and above, maybe not. It all depends on how you want to collect portscan data.

Sguil also comes with a patch that allows Snort's old-style portscan detector to dump information to the sensor agent, again for inclusion in the sguil database. This is a pretty good scan detector, but can take a lot of tuning in order to extract useful results. Newer versions of Snort include a more accurate detector called sfportscan. If you're using sfportscan, you don't need to apply the old patch, but if you prefer the old style detector, you still need the patch. See this question for more information on choosing a portscan detector.

How does sguil handle portscan detection?

Sguil uses Snort's builtin portscan detectors. There are two methods available, and you can use either or both.

The older method relies on Snort's portscan preprocessor to detect various types of scans. This preprocessor tracks attempted connections and tries to guess whether or not they constitute a scan. It usually requires a fair amount of tuning to get good results, but offers better backwards compatibility with earlier versions of Snort and sguil.

The newer method uses the sfportscan preprocessor, which debuted in Snort 2.3. This preprocessor tracks rejected connections, which offers greater accuracy with much less tuning. It also records more information about the scan itself, such as the number of hosts or ports contacted and the time range. Starting with sguil 0.6.0, this is the preferred method of detecting portscans.
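Enabling sfportscan is a one-line change to snort.conf. A typical entry looks something like the following (the memcap and sense_level values are examples; tune them for your network):

```
preprocessor sfportscan: proto { all } scan_type { all } sense_level { low } memcap { 10000000 }
```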

How does sguil handle VLANs and 802.1Q tags?

Sguil works quite well in an 802.1Q environment. Some of the monitoring processes already support VLAN tags natively (e.g., snort and SANCP). Others require some patching (tcpflow, PADS). If you're using Richard Fifarek's RPM Repository then all that hard work is done for you, otherwise you'll need to apply the VLAN patches.

Using Sguil

How do I set up sguil to automatically categorize incoming alerts?

This is called "automatic categorization", or just "autocat" for short. Take a look at /etc/sguild/autocat.conf, which contains full instructions. Once you edit this file, you'll need to restart sguild in order for the changes to take effect.

NOTE: Be sure you are running sguild with the proper "-a" flag!
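As an illustration only (the authoritative field layout is the comment block in autocat.conf itself), each entry is a ||-separated rule that matches on sensor, time, addresses, ports, protocol and signature, and ends with the status/category to assign. Something like this would auto-file matching WEB-IIS alerts as category 1:

```
# erase_time||sensor||timestamp||src_ip||src_port||dst_ip||dst_port||proto||signature||category
none||ANY||ANY||ANY||ANY||192.168.1.5||80||6||%%REGEXP%%^WEB-IIS||1
```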

Can sguil page me when it sees a particular alert?

Thanks to Will Button (wbutton at skyhawks d0t com) for contributing the following:

Yes, using the email configuration file on the sguild server (for version 0.6
and higher). Note that the file is only read at init and reread on
HUP signals, so if you make changes to it, you'll need to restart
or HUP sguild.
Set-up is fairly straightforward, as the file is very well documented.

To activate:
	set SMTP_SERVER {your_mail_server} 
	set EMAIL_RCPT_TO ","
	set EMAIL_FROM ""

Modify your notification options to meet your needs:

	set EMAIL_CLASSES "successful-admin trojan-activity attempted-admin attempted-user"

Optionally, use the last two parameters, EMAIL_DISABLE_SIDS and 
EMAIL_ENABLE_SIDS to override any specific sids you'd like.

Restart sguild to complete.

By the way, the procedure for 0.5.3 and previous releases is very similar, except that the email configuration is included directly in the sguild.conf file instead.

How do I expire (purge) old information from the database?

Sguil 0.5.3 comes with a handy script for this, called archive_sguildb.tcl. Basic usage looks like this:

archive_sguildb.tcl -d 2004-12-27 -p 2004_12_27_ --dbname sguildb \
       --dbhost localhost --dbuser sguil --dbpass password --event \
       --session --sancp

This command would expire all event, session and SANCP entries older than "2004-12-27", placing them in new tables called "2004_12_27_event", "2004_12_27_session" and "2004_12_27_sancp". You can drop these tables if you don't want the data, or you can keep them around in case you need to make historical queries. As long as you have the disk space to store them, these older tables do not affect the performance of queries running against the current tables.
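For example, if you decide you don't need the archived copies created by the command above, you can drop all three in one statement:

```
DROP TABLE 2004_12_27_event, 2004_12_27_session, 2004_12_27_sancp;
```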

After expiring old data, you should also run mysqlcheck to re-optimize the database, reindex and repair the tables, and reclaim the space used by the expired data.

Be warned that expiring old data may take hours on a large database (especially the sessions and SANCP tables). This can temporarily lock tables in the db, which will interfere with queries and with insertions. The sensors will queue up their data and try again when the table is unlocked, but interactive use might suffer. It's probably best to run these overnight when no one is using the GUI.

For sguil 0.5.3, you might also want to try out David Bianco's sguil_age_db script, which is a wrapper for archive_sguildb.tcl. The script's advantage is that it doesn't require you to give an absolute date for the expiration time, and you can specify different thresholds for different tables. For example:

   sguil_age_db --event "-27 days" --session "-3 weeks" --sancp "-1 month"

This makes it a little more suitable for running out of cron.
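For example, a nightly crontab entry might look like this (the install path is an assumption; point it at wherever you keep the script):

```
# Expire old sguil data at 02:30, when analysts are unlikely to be in the GUI
30 2 * * * /usr/local/bin/sguil_age_db --event "-27 days" --session "-3 weeks" --sancp "-1 month"
```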

Sguil 0.6.0 and above changes the database schema extensively, and the archive script is no longer necessary. This version uses MERGE tables to create "virtual tables" for events, SANCP records and other supporting information. Each virtual table is composed of a number of individual tables, one per day. The table names look something like "tablename_sensorname_date" (e.g., "sancp_externalnet_20051128", "event_finance_20051031" or "data_finance_20051031"). The sguil server creates the merged tables dynamically, so you'll find "event", "icmphdr", "tcphdr", "udphdr", "data" and "sancp" tables, along with all the individual daily tables that make them up.

Given this, if you want to get rid of old data, simply stop the sguil server, drop the daily tables you don't want, drop the merged tables, then restart the sguil server. Sguil will recreate the merged tables using the remaining data in the database.

Here is a handy bash shell script that automates this process and also repairs the remaining tables to keep data corruption to a minimum:

#! /bin/bash

# Placeholder configuration -- adjust these for your installation
DB_USER="sguil"
DB_PASSWORD="password"
DATABASE="sguildb"
DAYS_TO_KEEP=30
# Daily tables are named like event_sensorname_20051031; compare on the date field
KEEPDAY=`date -d "-$DAYS_TO_KEEP days" '+%Y%m%d'`   # GNU date syntax

/sbin/service sguild stop

for TABLEPREFIX in "data" "event" "icmphdr" "sancp" "tcphdr" "udphdr"
do
	TABLES=(`/usr/bin/mysql -u$DB_USER -p$DB_PASSWORD -BN -e "SHOW TABLES LIKE '${TABLEPREFIX}\_%';" -D $DATABASE`)
	for TABLE in "${TABLES[@]}"
	do
		TABLEDAY=`echo "$TABLE" | awk -F_ '{print($3)}'`
		if [ "$TABLEDAY" -lt "$KEEPDAY" ]
		then
			/usr/bin/mysql -u$DB_USER -p$DB_PASSWORD -BN -e "DROP TABLE $TABLE;" -D $DATABASE
		else
			/usr/bin/mysql -u$DB_USER -p$DB_PASSWORD -BN -e "REPAIR TABLE $TABLE;" -D $DATABASE
		fi
	done
	# Drop the merged table too; sguild recreates it from the remaining daily tables at startup
	/usr/bin/mysql -u$DB_USER -p$DB_PASSWORD -BN -e "DROP TABLE IF EXISTS $TABLEPREFIX;" -D $DATABASE
done

/sbin/service sguild start

What commands are available in the "User Messages" window?

Most people probably don't realize this, but the client's User Messages window is good for more than just user-to-user chat. It also offers a few simple commands you can use to check the status of the sguil sensors and server. To use one of the commands, simply type it on a line by itself in the User Message tab.

Version 0.5.3 supports the following commands:

Command - Purpose
agents - Lists all the sensor agents connected to sguild. This is deprecated, but still supported.
healthcheck - Like the "agents" command, but more comprehensive. It also checks each agent to make sure it is still actively responding to requests. Unlike the other commands, its output is displayed in the "System Messages" tab. This is also deprecated, since the 0.6.0 client includes a handy "Sensor Status" panel.
sensors - An alias for the "agents" command.
who - Lists all users connected to sguild.

I'm not satisfied with the default packet logging subsystem. Are there any alternatives available?

Yes. Two alternatives have already been developed, based on DaemonLogger and SANCP. See Packet Logging in Sguil for more information on this subsystem and the alternatives available.


Something's not working. Where do I start?

This is a very hard question, since sguil is pretty complex. Probably the best thing to do is to put some of the pieces into debug mode and see what messages you get. Sguild and the sensor_agent are the most likely places to start (see Putting sguild into debug mode and Putting sensor_agent into debug mode). If that doesn't help, you might want to drop by the IRC channel and ask for help.

Putting sguild into debug mode

Locate your sguild.conf file. This is usually /etc/sguild/sguild.conf, but the exact location may vary on your installation. Then edit the line that says "set DEBUG 0" to "set DEBUG 2" (debug level 1 is far less useful for troubleshooting) and restart sguild.

While you're editing sguild.conf, you might also want to make sure you have "set DAEMON 0" in the configuration as well. This will prevent sguild from forking into the background when it starts up, which is useful when you're running it in debug mode. If you have "set DAEMON 1" (or if you use the -D option when starting sguild), you'll also get debug output, but it will go into syslog.

Note: Versions of sguil prior to 0.5.3 will syslog your database password if DEBUG is 2 and sguil runs in daemon mode. Versions 0.5.3 and newer will replace it with "xxxxx" in the syslog, but will still show it if debug output is going to stdout.

Some of the messages may be confusing, while others are pretty self-explanatory. Here is an attempt to explain, line by line, what the messages are saying, using the debug output from an example 0.7.0 server.

Putting sensor_agent into debug mode

This is just like putting sguild into debug mode, except you edit the sensor_agent conf file on the machine that runs the sensor you are debugging. This is usually /etc/sensor_agent.conf, but this may be different on your installation. Edit the line that says "set DEBUG 0" to read "set DEBUG 1" and restart sensor_agent.

At this point, you might also want to make sure you have "set DAEMON 0" in the configuration as well. This will prevent the sensor_agent process from forking into the background when it starts up, which is useful for debugging. If sensor_agent runs in daemon mode and debugging is turned on, the messages will go to syslog.

I'm seeing IPC problems with sguild, and data isn't being loaded into the database

Benjamin Smee ben at disciplina d0t net has this to say:

If you are experiencing problems with IPC or seemingly 
bizarre problems with data not being loaded into your DB, then
chances are you have enabled threading in Tcl or TclX. You 
must DISABLE threading for sguild to work correctly due to 
the forking nature of how sguild is currently written.

Sguild versions 0.5.4 and above have been fixed to exit with a warning message if run inside a threaded Tcl interpreter. If you are using an older version of sguil, you may need to recompile Tcl in order to disable threading.

See this question to find out how to check whether threading is enabled in your version of Tcl.
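In short: Tcl sets the tcl_platform(threaded) element when it was built with thread support, so a quick check from the command line looks like this:

```
% echo 'puts [info exists tcl_platform(threaded)]' | tclsh
```

If this prints 1, your Tcl was compiled with threading and sguild will not run correctly.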

This problem can also be caused by incorrect permissions on various files and directories. Make sure that every component can read what it's supposed to read (e.g., that the agents can read the sensors' data). Also make sure that the OS user the database runs as (usually the mysql user) can read the temporary load directory (in NSMnow this is /nsm/server_data/server1/load/), not just the sguil database user. If you're not sure, check each agent, determine which user it should run as, su to that user, and try to access the sensor's data.

I'm seeing "ERROR: some events may not have been updated. Event(s) may be missing from the DB" when I select events in the GUI. What's wrong?

You're probably using version 0.5.3 or earlier. Here's what qru has to say about this:

This bug is from some poor logic that I put in op_sguil. Alerts that  
don't have an IP header, or aren't TCP, UDP, or ICMP aren't put into 
the database. So, when the RT alert comes into the console, there 
isn't anything to update in the DB. The problem will be fixed in the 
next release.

Barnyard dies at startup, with "Duplicate Entry" error

In sguil < 0.5.3...

When barnyard exits unexpectedly, it sometimes doesn't finish updating the database properly. If this happens, you'll see an error message like the following when barnyard tries to start again:

Error (Duplicate entry '1-270' for key 1) executing query: INSERT INTO data(sid,
cid, data_payload) VALUES('1', '270', '<Long String of Numbers>') 

Fatal Error, Quitting..


If this happens, you have two choices. You can either delete the partial entry from the data table, or you can try to create a dummy event entry to go along with it. Which one you choose is up to you, though in most cases you should probably just delete the entry.

No matter which way you choose, you'll need to know the sensor ID (sid) and count ID (cid) for the bad entry. In the example message above, you can see these on the first line, where it says "Duplicate entry '1-270' for key..." The sid is "1" and the cid is "270".

To delete the incomplete entry, use the following SQL query, replacing the sid and cid values with numbers from your own message:

DELETE FROM data WHERE sid=1 AND cid=270;

To add a dummy entry, use this query instead:

INSERT INTO event (sid, cid, signature, signature_gen, signature_id, 
signature_rev, timestamp, status) VALUES (1, 271, 'foo', 99999, 999999999, 
99999, '2005-01-03', 1);

Again, you'll need to replace the sid and cid (first two entries in the VALUES list) with the numbers from your own error message. You might also want to replace the date "2005-01-03" with the current date, but that's not as critical.
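If you're scripting a cleanup, a small shell sketch can pull the sid and cid out of the message automatically (the error string below is the sample from above, truncated for brevity):

```shell
# Sample barnyard error message (payload truncated)
ERR="Error (Duplicate entry '1-270' for key 1) executing query: INSERT INTO data(sid, cid, data_payload) VALUES(...)"

# Pull out the 'sid-cid' pair, then split it on the hyphen
SIDCID=`echo "$ERR" | sed -n "s/.*Duplicate entry '\([0-9]*-[0-9]*\)'.*/\1/p"`
SID=${SIDCID%-*}
CID=${SIDCID#*-}
echo "$SID $CID"   # prints "1 270"
```

From there you can feed $SID and $CID into the queries above.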

In sguil 0.6.0 and above...

Simply restart barnyard for the sensor in question:

/etc/init.d/barnyard-<sensor-name> restart

Barnyard dies with "Invalid packet length" error

Snort writes the unified output files as big binary blobs, and sometimes it crashes (or is stopped) in the middle of a write. If this happens, there may be a partial record in the log file that's messing things up. You'll know this is the case if you see an error message like the following:

ERROR: Invalid packet length: 976577328 Read error Fatal Error

Unfortunately, since the unified format isn't well delimited, barnyard can't just skip to the next record and pick up where it left off, since it can't tell where one record ends and the next one begins. To recover from this state, delete the offending snort.log file and the waldo file (both in the Snort log directory) and restart barnyard.

If you're having trouble finding which file is corrupted, run barnyard in the foreground with the -v option to turn on debugging messages. The last spool file it opens will be the corrupt one.


If barnyard ALWAYS dies with this error, and you're running it on a 64-bit system, you probably need to apply a patch.

log_packets.sh shows current disk usage is 8174539%! Help!

You're probably running under Solaris, where the output of the df command is not quite compatible with Linux. log_packets.sh uses df to report free disk space, and expects the 5th field of the output to be the percentage of the disk in use. Standard Solaris df uses an incompatible output format by default.

The fix is to edit log_packets.sh to include the -k option to df. Find the line that says:

CUR_USE=`df $LOG_DIR | grep -v -i filesystem | awk '{print $5}' | tr -d %`

Change this to:

CUR_USE=`df -k $LOG_DIR | grep -v -i filesystem | awk '{print $5}' | tr -d %`
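You can sanity-check the corrected pipeline without touching a real disk by feeding it canned df output (the numbers below are made up):

```shell
# Fake df output: a header line plus one filesystem line
DF_OUTPUT="Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 10321208 8174539 1622261 84% /nsm"

# Same pipeline as above: drop the header, take field 5, strip the %
CUR_USE=`echo "$DF_OUTPUT" | grep -v -i filesystem | awk '{print $5}' | tr -d %`
echo $CUR_USE   # prints 84
```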

Barnyard says "No input plugin found".

When you start barnyard, you get the following error message:

ERROR: No input plugin found for magic: a1b2c3d4

This indicates that you've asked barnyard to read a PCAP file with a collection of raw IP traffic, not a Snort unified log file. This is confusing to some, since the default file names for the log_unified output plugin and the "tcpdump format" option (snort -b) are both "snort.log". To correct this error, first change the snort configuration to specify a different filename, like so:

output log_unified: filename snort.unified, limit 128

After you change the output filename, you'll also have to tell barnyard the new name with the -f parameter. This is done in the init file (/etc/init.d/barnyard-$SENSOR if you followed the Sguil_on_RedHat_HOWTO guide.) Below is an excerpt showing the changed portion of the file:

barnyard="/usr/local/bin/barnyard -c /etc/sguil/barnyard-$SENSOR.conf \
    -g /usr/local/snortrules-$SENSOR/ -s /usr/local/snortrules-$SENSOR/ \
    -f snort.unified -w $SNORTLOG/waldo.file \
    -p /usr/local/snortrules-$SENSOR/classification.config \
    -X /var/run/sguil/barnyard-$ -L $SNORTLOG -a $SNORTLOG"

Finally, the '-b' option must be removed from the snort command-line arguments; otherwise the command-line argument will override the changes made to the snort.conf file. The '-b' option lives in the log_packets-$ script, in the section shown below:

   today=`date '+%Y-%m-%d'`
   if [ ! -d $LOG_DIR/$today ]; then
     mkdir $LOG_DIR/$today
     chmod 777 $LOG_DIR/$today
   fi
   if [ -n "$FILTER" ]; then
     eval exec $SNORT_PATH $OPTIONS -l $LOG_DIR/$today -b -i $INTERFACE $FILTER > /tmp/snort.log 2>&1 &
   else
     eval exec $SNORT_PATH $OPTIONS -l $LOG_DIR/$today -b -i $INTERFACE > /tmp/snort.log 2>&1 &
   fi

While in debug mode, Barnyard throws "ERROR: No input plugin found for magic: 00000001"

Barnyard dies immediately when you try to start it. When you put barnyard in debug mode you get the above error in this context:

Opened spool file '/var/log/snort-sensorname/snort.log.1277152021'
ERROR: No input plugin found for magic: 00000001
Fatal Error, Quitting..

It's likely that the file mentioned just before the error is corrupt, or that another process created a file with the same prefix. Either way, barnyard is trying to read a file in a format it does not understand. The easiest fix is to delete the file in question

rm /var/log/snort-sensorname/snort.log.1277152021

and restart barnyard again to see if it will carry on. If this happens on more than a few files you may have a more serious issue at play.

Sguild takes a long time to start up. What's wrong?

The most common reason for this is that you have a lot of uncategorized events in the database. Remember, sguil is not an alert browser. It assumes that an analyst will review every event and categorize it appropriately. When sguil starts up, it has to load in all the uncategorized events so that it can send them to the clients when they connect. If you haven't been categorizing things on a regular basis, the events will pile up and sguild will take a long time to start. If you keep up with the events every day, the startup time will be much more reasonable.

I'm seeing error code 24 from MySQL. How do I fix that?

Error code 24 means that the MySQL database has hit its maximum number of simultaneously open files. This can happen more often with sguil 0.6.0 because the use of MERGE tables increases the number of files the database has to open for large queries. You can usually fix this by adding/editing the following line in the [mysqld] section of /etc/my.cnf:

open_files_limit = 4096

You can also pass the equivalent as a command-line argument in your mysqld startup script, such as /usr/local/etc/rc.d/mysql-server:

--open-files-limit=4096
If you keep a lot of history in your event or sancp tables, you may even need to increase this to a larger number. If you have an OS-imposed limit on the number of open files, you should also make sure that it agrees with what you've listed in my.conf. In Linux you could issue: $ ulimit -a|grep "open files" Then you might need to edit you mysql startup script to include: ulimit -n 4096 (or more :)

This last setting must be written at the very beginning of the mysql startup script. If it is written on the last line of that script, mysql will start with the default ulimit (1024) and the problem will persist. (On Debian this file could be /etc/init.d/mysql; around line 21 works.)
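A quick sanity check, assuming the /etc/my.cnf path and variable name used above (adjust both for your installation), is to compare the OS limit with the configured one:

```shell
#!/bin/sh
# Compare the shell's open-files limit (inherited by mysqld at startup)
# with the open_files_limit value in my.cnf, if one is set. The config
# path is an assumption; change it to match your install.
os_limit=$(ulimit -n)
cnf_limit=$(awk -F= '/^open_files_limit/ { gsub(/ /, ""); print $2 }' /etc/my.cnf 2>/dev/null)
echo "OS limit: $os_limit   my.cnf limit: ${cnf_limit:-unset}"
```

If the OS number is lower than the my.cnf number, the my.cnf setting cannot take effect.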

Sguild (loaderd) dies while trying to load SANCP data into the database.

Sguil uses the MySQL "LOAD DATA INFILE" syntax to load the SANCP data into the central database, and this requires special privileges above the normal INSERT/UPDATE privileges. You must also grant the FILE privilege, like so:

GRANT FILE ON *.* TO sguil@localhost;

Notice that the privilege is granted on every database and every table (*.*). FILE is a global privilege, and cannot be granted for just a single table or database.

If this does not work, check for failed files in /var/nsm/load/failed (on *BSD). Run sguild with maximum debugging (sguild -d 2) and look for the failures. Incorrect permissions on /var/nsm/ and /var/nsm/load can lead to SANCP files not being inserted in to the database. Make sure both directories have 755 permissions.
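As a quick illustration of what mode 755 looks like in a directory listing, here's a sketch that uses a temporary directory as a stand-in for /var/nsm and /var/nsm/load:

```shell
#!/bin/sh
# Show what a 755 directory looks like in `ls -ld` output, using a temp
# directory as a stand-in for /var/nsm; check your real directories the
# same way.
d=$(mktemp -d)
chmod 755 "$d"
perms=$(ls -ld "$d" | cut -c1-10)   # mode string, e.g. drwxr-xr-x
echo "$perms"
rmdir "$d"
```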

Sguild complains about threading issues, then dies

If you're seeing a message like this:

ERROR: This version of tcl was compile with threading enabled. Sguil  is NOT
compatible with threading.
SGUILD: Exiting...

or maybe this:

Tcl_FinalizeNotifier: notifier pipe not initialized

it means that your version of Tcl is compiled with threading support. Sguil is not compatible with threaded versions of Tcl. To solve this problem, you either need to find a version of Tcl for your OS that is compiled without threads, or compile one yourself.

There have been reports of Fedora Core users successfully using the FC1 version of the Tcl RPMs, even on versions up to and including FC5. Here are the links to the RPMs if you want to try them:

We've also heard that users of FC and Red Hat Enterprise Linux can easily recompile the Red Hat-supplied source RPMs to remove threading support. This fixes the problem with Sguil and leaves you with a Tcl that is still very nearly the same as the one that came with your OS. Some basic instructions on this can be found here.

sguil_logger dies when the pcap file hits 2GB.

Most likely you need to compile libpcap and snort for large file support. This can be done like this:


# cd libpcap-0.9.7
# ./configure --prefix=/usr/local/libpcap-0.9.7-largefile
# make 
# make shared
# make install
# make install-shared
# ln -s /usr/local/libpcap-0.9.7-largefile /usr/local/libpcap


# export PATH=$PATH:/usr/local/libnet/bin
# cd snort-
# ./configure --prefix=/usr/local/snort- \
 --with-libpcap-includes=/usr/local/libpcap/include \
 --with-libpcap-libraries=/usr/local/libpcap/lib \
 --with-libnet-includes=/usr/local/libnet/include \
 --with-libnet-libraries=/usr/local/libnet/lib \
 --enable-flexresp --enable-dynamicplugin
# make
# make install
# ln -s /usr/local/snort- /usr/local/snort
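After rebuilding, you can verify that the OS and filesystem side can handle offsets past 2GB independently of snort. This sketch writes a single byte just beyond the 2GB boundary into a sparse temporary file:

```shell
#!/bin/sh
# Confirm the OS/filesystem can address offsets past 2GB by writing one
# byte at offset 2^31 into a sparse temp file. This only exercises the
# OS side; snort and libpcap still need to be built with large file
# support as shown above.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/big" bs=1 count=1 seek=2147483648 2>/dev/null
size=$(wc -c < "$tmp/big" | tr -d ' ')
echo "$size"    # one byte past the 2GB mark
rm -rf "$tmp"
```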

A sensor does not report the status of the snort agent to the sguil client's "Agent Status" window (Specifically - no data in the "Last" column).

Make sure you've correctly declared the "HOME_NET" variable in the snort.conf file for this sensor. If the declared network doesn't match the network being monitored you won't see any data for this snort agent on the client.

A sensor's snort agent process does not update on the sguil client (Specifically - an unchanging timestamp in the "Last" column).

Keep in mind that the "Last" timestamp indicates the last time an alert fired. This is not an indication of the last traffic your sensor saw. Your sensor could be quite happy and working great and have a "Last" timestamp of days or weeks ago simply because it hasn't seen traffic that caused a rule to fire.

If, however, you purposefully generate traffic that you KNOW matches a valid rule (think TFTP GET passwd) and this timestamp doesn't reflect that:

Delete the sensor's "waldo.file" (usually found in /nsm/snort-logs/sensorname) and restart barnyard.

Sguil on the server will not start on 2nd boot - Error in /var/log/messages - "ERROR: loaderd: You appear to be using an old version of the sguil database schema that does not support the MERGE sancp table. Please see the CHANGES document for more information."

This seems to happen on systems using newer versions of MySQL. I found the issue in 5.1.44.

In the sguil/server/lib directory, in the SguildMysqlMerge.tcl file you will find the following line:

if { $tableStatus != "" && [lindex $tableStatus 1] != "MRG_MyISAM" } {

Change it to:

if { $tableStatus != "" && [lindex $tableStatus 1] != "MRG_MYISAM" } {

Also, in the SguildLoaderd.tcl file replace any instances of MRG_MyISAM with MRG_MYISAM.

Then restart sguild.
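The same edit can be scripted with sed as a global, case-sensitive substitution. This sketch demonstrates the substitution on a temporary copy of the line; point sed at the real SguildMysqlMerge.tcl and SguildLoaderd.tcl in your sguil/server/lib directory (after backing them up):

```shell
#!/bin/sh
# Demonstrate the MRG_MyISAM -> MRG_MYISAM substitution on a temp file.
# In practice, run the sed command against SguildMysqlMerge.tcl and
# SguildLoaderd.tcl instead of "$tmp".
tmp=$(mktemp)
echo 'if { $tableStatus != "" && [lindex $tableStatus 1] != "MRG_MyISAM" } {' > "$tmp"
fixed=$(sed 's/MRG_MyISAM/MRG_MYISAM/g' "$tmp")
echo "$fixed"
rm -f "$tmp"
```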

Error when generating a transcript in the Sguil console and using indexed pcaps

If you get an error similar to this in the Debug Window when generating a transcript:

Fetching PCAP data...
Error fetching pcap: Unable to create output files, please provide one of the following: oprefix <outputfileprefix> (i.e. tcpdump.eth0. ) filename prefix for output files, if needed osuffix <outputfilesuffix> (i.e. .pcap or .raw) filename suffix for output files, if needed spliton <columnnumber> (i.e. 'spliton 4') column of values to be used as filenames for writing (associated) packets to, use with oprefix or osuffix optionschild killed: write on pipe with no readers

Most likely you're running an older (buggy) version of the script. Either get a newer version of the file or make this simple change in the code:

if(("$optionkey{oprefix}$optionkey{osuffix}"=="") && !$optionkey{spliton}) {

to:

if(("$optionkey{oprefix}$optionkey{osuffix}"eq"") && !$optionkey{spliton}) {

No restart of anything needed.

SQL Questions

[Editor's Note: Many of the SQL queries in this section are based on those originally supplied to me by parrothd. Thanks for collecting these!]

What tables are in the database?

As of version 0.5.3, 15 tables come standard, but you usually only need to worry about a handful of them. They are:

| Table    | Purpose |
| sensor   | Keeps track of all the sguil sensors (really the sensor_agent processes). There will be one row for each sensor_agent. |
| event    | This is where the snort alert data goes. It includes everything you see in the top two panes, but does not include portscan data or alert packet data, which is in a separate table. |
| pads     | Stores service detection information gathered by the PADS component of the Sguil sensor. (v0.7.0 and above only) |
| portscan | This is where all the portscan alerts shown in pane 3 are stored. This only applies to alerts generated by the older-style spp_portscan snort preprocessor. Newer-style alerts are generated by the sfPortscan preprocessor, and can be found in the event table with all the other alerts. (v0.6.1 and below only) |
| sessions | If you are capturing the snort-based network session records, this is where they are stored. This is obsolete now, as most Sguil installations are now using SANCP. (v0.6.1 and below only) |
| sancp    | If you are capturing network session records using SANCP, this is where they are stored. |

Note: Sguil version 0.6.0 introduced the use of MERGE tables for the events and sancp tables. Sguil now breaks up these monolithic tables into daily tables for each sensor. This would normally make things very difficult to search, but that's where the MERGE tables come in. Sguil combines the daily tables into "virtual" tables, so you can still search many tables in a single query. The examples below still work, and hopefully will be faster with the new MERGE capability.

Here are some of the important elements of each table. Not every element is listed, but the ones below should be enough to get you through almost any situation:

'sensor' table description

| Column   | Type      | Description |
| sid      | int       | The unique ID of this sensor |
| hostname | varchar   | The sensor's name. This isn't really a hostname, since different sensors on the same hardware will have different values in this field. It's really an arbitrary name for human use |
| updated  | timestamp | The time and date the sensor was first registered with the database |

'event' table description

| Column    | Type     | Description |
| sid       | int      | The unique ID of this sensor |
| cid       | int      | The sensor's unique event number |
| signature | varchar  | The human-readable name of the event (e.g., "WEB-IIS view source via translate header") |
| timestamp | datetime | The time the event occurred on the sensor |
| status    | int      | The sguil classification assigned to this event. Uncategorized events have status 0. |
| src_ip    | int      | The event's source IP address. Use INET_NTOA() and INET_ATON() to convert to readable form, or convert a readable form to the database's integer representation. |
| dst_ip    | int      | The event's destination IP address |
| src_port  | int      | The source port of the packet that triggered the event |
| dst_port  | int      | The destination port of the packet that triggered the event |
| ip_proto  | int      | IP protocol type of the packet (6 = TCP, 17 = UDP, 1 = ICMP, but others are possible) |

'pads' table description

| Column      | Type     | Description |
| hostname    | varchar  | The sensor's name. This isn't really a hostname, since different sensors on the same hardware will have different values in this field. It's really an arbitrary name for human use |
| sid         | int      | The sensor ID reporting the session (see the 'sensor' table) |
| asset_id    | int      | A unique record identifier for each row in the database. |
| timestamp   | datetime | The date and time at which a particular service was detected |
| ip          | int      | The encoded IP address of the server offering the service |
| service     | varchar  | The human-readable "short name" of the well-known service (e.g., ftp or http) |
| port        | int      | The port on which the service was detected |
| ip_proto    | tinyint  | The IP protocol used by the service (6 == TCP, 17 == UDP) |
| application | varchar  | The human-readable name of the application, including the version number if it can be determined (e.g., Microsoft FTP Server 5.0) |
| hex_payload | varchar  | The hex-encoded application banner detected by PADS |

'portscan' table description

| Column    | Type     | Description |
| hostname  | varchar  | The name of the sensor reporting the event (see the 'sensor' table) |
| timestamp | datetime | The time the portscan event was detected by the sensor |
| src_ip    | varchar  | The portscan event's source IP address |
| dst_ip    | varchar  | The portscan event's destination IP address. |
| src_port  | int      | The source port of the packet that triggered the event |
| dst_port  | int      | The destination port of the packet that triggered the event |
| data      | text     | Arbitrary information about this attempt provided by Snort |

'sessions' table description

| Column     | Type     | Description |
| sid        | int      | The sensor ID reporting the session (see the 'sensor' table) |
| xid        | int      | The sensor's unique session number |
| start_time | datetime | The session start time |
| end_time   | datetime | The session end time |
| src_ip     | int      | The session's source IP address. Use INET_NTOA() and INET_ATON() to convert to readable form, or convert a readable form to the database's integer representation. |
| dst_ip     | int      | The session's destination IP address |
| src_port   | int      | The source port of the session |
| dst_port   | int      | The destination port of the session |
| ip_proto   | int      | IP protocol type of the session (6 = TCP, 17 = UDP, 1 = ICMP, but others are possible) |
| src_pckts  | bigint   | Number of packets sent from source to dest |
| dst_pckts  | bigint   | Number of packets sent from dest to source |
| src_bytes  | bigint   | Number of bytes sent by the source host to the destination host |
| dst_bytes  | bigint   | Number of bytes sent by the destination host back to the source |

'sancp' table description

| Column     | Type     | Description |
| sid        | int      | The sensor ID reporting the session (see the 'sensor' table) |
| sancpid    | bigint   | The sensor's unique session number |
| start_time | datetime | The session start time |
| end_time   | datetime | The session end time |
| duration   | int      | Length of the session in seconds |
| src_ip     | int      | The session's source IP address. Use INET_NTOA() and INET_ATON() to convert to readable form, or convert a readable form to the database's integer representation. |
| dst_ip     | int      | The session's destination IP address |
| src_port   | int      | The source port of the session |
| dst_port   | int      | The destination port of the session |
| ip_proto   | int      | IP protocol type of the session (6 = TCP, 17 = UDP, 1 = ICMP, but others are possible) |
| src_pckts  | int      | Number of packets sent from source to dest |
| dst_pckts  | int      | Number of packets sent from dest to source |
| src_bytes  | int      | Number of bytes sent by the source host to the destination host |
| dst_bytes  | int      | Number of bytes sent by the destination host back to the source |
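Since the src_ip and dst_ip columns store addresses as 32-bit integers, it can help to see exactly what INET_ATON() computes. This sketch reproduces the conversion in shell, so you can check a value you see in the database by hand:

```shell
#!/bin/sh
# Reproduce MySQL's INET_ATON(): a dotted quad becomes a single 32-bit
# integer (each octet shifted into place), which is how src_ip/dst_ip
# are stored in the event, sessions and sancp tables.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

n=$(ip_to_int 192.168.1.1)
echo "$n"
```

INET_NTOA() is simply the reverse mapping, from integer back to dotted quad.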

Can I query the database directly?

Yes, there are three ways to do this. First, sguil has a built-in database query tool. To access it, bring down the "Query" menu from the menubar, then select "Query Builder". From here, you can query the event, sessions or sancp table. By clicking the GUI elements, you can build a valid query for whichever table you are working with, or you can simply type your query into the text area. If you run queries from the Query Builder, they will show up in the sguil console and you will be able to right-click on the rows to perform all the common sguil actions.

Many advanced sguil users prefer to use the MySQL command line client for direct searches. You can start the client from the database host machine like so:

mysql -u <sguil user> -p <sguil_db_name>

For example, if your sguil user is "sguil", and your database name is "sguildb", you could use the following command:

mysql -u sguil -p sguildb

You'll be prompted for your database password, and once you enter it, you'll be presented with an SQL command line, from which you can run any of the examples in this section. Remember that when using the MySQL client, you must end each command with a semicolon (";") in order for it to be executed.

How can I count the number of events/sessions in the database?

To see the total number of events in the database, try the following:

SELECT COUNT(*) FROM event;
If you want to know how many SANCP sessions sguil recorded, try:

SELECT COUNT(*) FROM sancp;
If you like, you can also add a date range to either of the above queries, like so:

SELECT COUNT(*) FROM sancp WHERE start_time BETWEEN "2005-11-20" AND "2005-11-22";

You can even specify the range down to the second:

SELECT COUNT(*) FROM sancp WHERE start_time BETWEEN 
"2005-11-20 04:03:00" AND "2005-11-20 05:03:00";

What if I want to count the number of events that haven't yet been categorized?

Simple: just count the events where the status field is 0 ("uncategorized"). It looks like this:

SELECT COUNT(*) from event WHERE status = 0;

How can I see the list of all the sensors in my database?

SELECT * FROM sensor;

How do you find the earliest event or session in the database?

If you just want to know the timestamp of the entry itself, you can use a query like the following:

SELECT MIN(timestamp) FROM event WHERE timestamp > "0";


SELECT MIN(start_time) FROM sancp WHERE start_time > "0";

If you want more information about the event itself, you need to do something a little more complex:

SELECT * FROM event ORDER BY timestamp LIMIT 1;


SELECT * FROM sancp ORDER BY start_time LIMIT 1;

What if I need to find the most recent event or session?

It's similar to finding the earliest event; you just sort the output in reverse order. If you want the timestamp of the entry, and not the entry itself, do this:

SELECT MAX(timestamp) FROM event;


SELECT MAX(start_time) FROM sancp;

If you need the actual entry, just add the "DESC" keyword to sort the output in reverse order:

SELECT * FROM event ORDER BY timestamp DESC LIMIT 1;


SELECT * FROM sancp ORDER BY start_time DESC LIMIT 1;

How can I find the number of sancp/event entries I've logged per day?

You are in luck. If you want information about how many sancp entries you've recorded for each day in a range of days, your query should look something like the following, after you replace "2005-01-01" and "2005-02-01" with the beginning and ending dates of the range you're interested in:

SELECT COUNT(*), DATE(start_time) as day from sancp WHERE start_time >= '2005-01-01' 
AND start_time < '2005-02-01' GROUP BY day;

This will produce a table that looks something like the following:

| COUNT(*) |         day  |
|      463 |   2005-01-14 |
|     1765 |   2005-01-15 |
|      218 |   2005-01-16 |
|   487662 |   2005-01-17 |
|  2158804 |   2005-01-18 |
|  2804460 |   2005-01-19 |
|  2010120 |   2005-01-20 |
|  2909722 |   2005-01-21 |
|  1591163 |   2005-01-22 |
|  1665594 |   2005-01-23 |
|  3351986 |   2005-01-24 |
|  2804676 |   2005-01-25 |

If you want the breakdown of events, you use a similar query, but replace "start_time" with "timestamp", like so:

SELECT COUNT(*), DATE(timestamp) as day from event WHERE 
timestamp >= '2005-01-01' AND timestamp < '2005-02-01' GROUP BY day;

Thanks to srh for providing the original query.

How can I restrict my queries to only inbound or outbound traffic?

Although the SQL code looks a little messy, this is actually pretty simple if you know your IP address range. Simply use the first address and the last address in the range. When used with virtually any other SANCP or event query, the following example would restrict your search to traffic entering your network:

[...] WHERE (dst_ip >= INET_ATON('IP_START') AND 
dst_ip <= INET_ATON('IP_END')) AND [...]

To measure traffic leaving your network, replace all occurrences of dst_ip with src_ip.

Here's the same pattern for searching traffic leaving the network (again substituting your network's first and last addresses for IP_START and IP_END):

[...] WHERE (src_ip >= INET_ATON('IP_START') AND 
src_ip <= INET_ATON('IP_END')) AND [...]

How can I find which ports are generating the most traffic on my network?

The answer depends on whether you want to know how many connections are being generated or how many bytes are transferred, and whether you count source port (the port creating the traffic) or the destination port (the port receiving the traffic). The solutions are all similar, but the query changes a bit.

First, let's try a simple example. To create a "Top 10" list of ports sending traffic, sorted by the number of connections involved, try a query like the following:

SELECT src_port, COUNT(*) AS cnt FROM sancp WHERE 
(start_time >= "2005-01-01 00:00:00" AND 
start_time < "2005-01-02 00:00:00") GROUP BY src_port 
ORDER BY cnt DESC LIMIT 10

This will produce a table like the following:

| src_port | cnt   |
|        0 | 76693 |
|      137 | 55219 |
|      138 | 44858 |
|      161 | 23318 |
|      123 | 17895 |
|       22 | 10982 |
|       80 |  9513 |
|     5190 |  8055 |
|      445 |  7818 |
|    32774 |  7434 |

If you'd rather sort the ports by the number of bytes transferred, you can do that as well. Normally, no matter whether you're working with source or destination ports, you care about the sum of all bytes sent and received over that connection. The following sample query will produce another "Top 10" list of ports sending or receiving traffic:

SELECT src_port, SUM(src_bytes + dst_bytes) AS total_bytes FROM sancp WHERE 
(start_time >= "2005-07-11 00:00:00" AND start_time < "2005-07-12") 
GROUP BY src_port ORDER BY total_bytes DESC LIMIT 10

This will produce a table like the following:

| src_port | total_bytes |
|    64805 |  4295814506 |
|    46617 |  3035558760 |
|    46456 |  1074729424 |
|     1709 |   736763893 |
|    46454 |   663853700 |
|    46448 |   659932663 |
|    50005 |   610899787 |
|      139 |   562557123 |
|    35175 |   546595066 |
|    46640 |   529514669 |

In either case, you can also report on incoming connections or data transfers by replacing src_port with dst_port.

How can I archive the database?

See the MySQL page for some instructions provided by qru.

Sguild was down for a while, and I have thousands of SANCP files to insert! HELP!

As you know, the Sguil sensors are pretty much independent of the actual Sguil server. They'll happily continue collecting data, even when sguild has been down for a while. When the server comes back up, the sensors will automagically reconnect and send all the queued data. Network session records collected by SANCP, however, can pose a big problem. After an extended outage, there could be thousands, even tens of thousands, of files waiting to be inserted.

You can usually deal with this situation by temporarily disabling index update operations on the affected SANCP table. See this blog entry for more detailed information.

[Snort Alert(1:#####:0] messages appear in the client's "Event Message" column instead of the rule's correct narrative description

Chances are that the specific sensor's file isn't up-to-date. Use the script to create it (something you might want to add to your Oinkmaster's batch file): /usr/local/snortrules-<sensorname>/ > /usr/local/snortrules-<sensorname>/

Then restart barnyard for that sensor:

/etc/init.d/barnyard-<sensorname> restart

This routine will NOT correct the entries that already appear as "[Snort Alert(1:#####:0]", but new entries should have the correct description.


How can I tell if threads are enabled in my Tcl interpreter?

Threaded Tcl causes lots of weird problems for Sguil, as mentioned above. To find out whether your Tcl interpreter was compiled with threading, execute the following command from within the Tcl interpreter. As shown in the example below, if the result is "0", threading is disabled.

tcl> info exists ::tcl_platform(threaded)
0