my time at splunk .conf18

A week ago I flew down to Orlando, Florida to attend Splunk .conf18. Thinking back on it, this is the first vendor-specific conference I’ve ever attended in my 15 years in IT. Depending on who you ask, the con had 7,500-9,500 attendees, making it the largest event to date. That’s pretty impressive! I attended as many talks as I could, and I left pretty happy with the content I consumed. The talks and slides are all available online.

Day 0 – Sunday
My goals for this day were just to get to Orlando, settle into the hotel, and do some recon of the grounds. On the plane I listened to some Darknet Diaries; finally finding some time for podcasts! Took some time to hit the Boardwalk and promptly got sick of the heat and humidity.

Day 00 – Monday
Goal today was to get registered for the con! The line was super quick, even at 10:30am with the masses checking in: get a badge, pick up the backpack/water bottle freebie, and grab the freebie hoodie. Beyond that, this day was pretty casual until the evening.

First Timer Orientation talk – This was a nice intro to the con, even though the room was moved and I didn’t hear about it until a co-worker texted me. I guess I need to click update notices in the event app! (Come on, I’m in security, I don’t click accept/download buttons unless I have to.) Also, this was the only talk I attended with a drink-in-hand speaker. (I’m not a huge drinker, nor do I need others to drink, but to me that still sets the tone for how informal a venue intends to be. This is why I like smaller cons over larger vendor ones.)

Welcome Soiree – This was a neat way to get people to the vendor floor: an evening event with free food and alcohol stations throughout the vendor floor. Scoped out vendors, splunk experts, projects, and plenty of swag. And I will admit, I evaluate vendor booths on three things: 1) whether I know and like them as a product/company and want to say hi, 2) whether I want some of their swag or not (either for me or to give away to others), and 3) whether I want to buy their product (and I’m not a purchasing approver, so that’s pretty much no one). I had fun down here, though someone kept turning on music every now and then and it was ridiculously loud.

Day 1 – Tuesday
Visionary and Roadmap Keynote + Breakfast – For the morning keynotes, buses took us to ESPN Arena where we picked up breakfast bags before taking seats. After the talk, I don’t think the bus crews were ready for the flood of people, and organization broke down pretty hard on one side of the venue, but we all got back in decent time (albeit later than intended due to the overlong keynote).

Security Super Session: Splunk Security Vision and Roadmap – A strong, high-level look at Splunk and using it for security operations. Not much to say on this one. The diagrams are wonderful (and would be used in several talks I’d see over the course of the con) for designing your security operations around.

Find and Seek – Real-time Asset Discovery and Identity Attribution Using Splunk – I didn’t actually see this talk. Tuesday was the one day where I was all over the grounds for various talks, and required buses to get me places in time, and the buses were still a little chaotic. I was on time getting to this talk, but 15 minutes after the start time we were all still waiting outside the room. Thankfully, it was right next to a sandwich distribution station, so I just left with my lunch to eat elsewhere. I’ll have to catch the recording later.

Let’s Get Hands-On with Splunk Enterprise Security, Splunk Phantom, and Real Boss of the SOC Data – This was the one “laptop required” talk I attended, and honestly one could have been just fine sitting back and watching along. This session had several hundred people in it, so you have to expect the presenters to move on without waiting for anyone, and move on they did! Thankfully, this was the introduction to a broader, slower-paced workshop series Splunk will run for security folks throughout 2019. As it was, I really enjoyed getting hands-on with some practice data for finding attacks; the data itself was used in the BOTS competition the previous evening. While I’m new to Splunk, it’s these hands-on demos, doing actual things with the data, that get me excited, rather than high-level, perfect-situation statements.

Threat Hunting and Anomaly Detection with Splunk UBA – I really liked this talk and speaker. While nothing about Splunk, anomalies, or hunting was new to me, I really loved the best/worst practices examples. That’s the sort of detailed, technical stuff I eat up, rather than non-filling high-level statements.

Pub Crawl – Similar to the soiree from the previous night, only with craft beer stations and less food overall. Other than the alcohol and snacks, I didn’t really need a second round through the vendor hall.

House of Blues – We also got invites to a party at the House of Blues. The music was just passable, but it was an excellent buffet, and I got a chance to sample the infamous Voodoo Shrimp (which was basically forgettable, to me). The best part was just getting another evening without a food bill!

Day 2 – Wednesday
Product and Technology Keynote – I’m not a huge breakfast person, and I found out you can watch the keynotes online, so I didn’t even bother heading out to see this one live. I opted to stay near the hotels and not fight lines for a latte.

Hacking Your SOEL: SOC Automation and Orchestration – I love technical talks, less so high level ones. But if there is one talk that I’d recommend that is high level about SOEL, and SOAR, and SOC automation, I’d point people to this one. The speaker just plain made sense of all of this. Sure, it was high level, but also detailed enough to formulate a roadmap for the future on the topic. One of the more solid talks I attended.

Attack Surface Reduction: Using Splunk to Spot the Security Flaws in your Network – The description for this was probably reflective of a longer talk that got cut down. This talk ended up being basically a firewall review 101 session, using Splunk to view logs for activity on firewall rules under review. I did learn one thing: monitor for sessions that hang, i.e. where no endpoint listens on the target port anymore. I probably would have gotten there eventually, but it’s worth keeping that situation in mind. The rest was really pretty newbie material.
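That hang-detection idea is simple enough to sketch outside of Splunk. Here’s a hypothetical Python sketch of the logic (in practice this would be an SPL search joining firewall logs against an asset inventory; the rule names and endpoints below are made up):

```python
# Hypothetical data: sessions parsed from firewall logs, plus a set of
# destinations known to actually be listening (from an asset/port inventory).
sessions = [
    {"rule": "allow-web", "dest": ("10.0.0.5", 443)},
    {"rule": "allow-legacy-app", "dest": ("10.0.0.9", 8080)},
]
listeners = {("10.0.0.5", 443)}

def stale_rules(sessions, listeners):
    """Rules still matching traffic toward destinations with no listener (hung sessions)."""
    return sorted({s["rule"] for s in sessions if s["dest"] not in listeners})

print(stale_rules(sessions, listeners))
```

Rules that surface here are candidates for review or removal during the next firewall audit.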

Which brings me to one of my main challenges: finding the right level of talk for the topic. For instance, I’m a newbie with Splunk, but I’m very deep with security concepts, both defense and offense. I would love to have known this talk was aimed at security newcomers, as I would have avoided it. The same applies to some of the threat hunting and SOC automation talks, which sometimes felt like they were repeating the same high-level things without much deeper substance (i.e. aimed at people less senior than I am). This might not be a con issue so much as my own misuse of the con, i.e. fewer talks, more 1-on-1 and breakout discussions.

Cops and Robbers: Simulating the Adversary to Test your Splunk Security Analytics – Came into this very interested, but also skeptical on why the heck I’d want to spend time automating attacks like I’m some QA team. But this talk made a great case for why you do this, and how you approach it, particularly with Phantom and some other tools. Looks very cool for use on an internal testing team that evaluates not only internal response and controls, but also can test security products and even do some training exercises with your Splunk teams.

WMI – The Hacker’s Chocolate to their Powershell Peanut Butter – Probably the deepest technical talk I saw at the con, covering attackers using WMI, WinRM, and Powershell in modern, often fileless attacks, and how you can use Splunk and general logging to hunt those compromises down. I really enjoyed it, and it was a great reflection of the Splunk security research arm.

Monitoring and Mitigating Insider Threat Risk with Splunk Enterprise and Splunk UBA – As a Splunk newbie, I wanted a mix of talks on some of their products and how I can wrap my security team around them and my own priorities and goals. This was a good talk about implementing insider threat detection using Splunk UBA. I’ll likely revisit this again as we start our own projects on this in the coming quarters.

Search Party! At Universal Islands of Adventure – Such an absolutely fun time having the park to ourselves to avoid lines and endless children in order to ride Hogwarts Express, Harry Potter’s Forbidden Journey, and the Jurassic Park river ride. The Express was super fun, Forbidden Journey absolutely awesome, and the Jurassic Park ride a fun mess that stopped 3 times and ended up taking about 30 minutes to get through. The walk around the park was fun, though the back half through the Marvel and Comic Book zones was pretty unexciting compared to the other areas. Really wish we had more than 2-3 hours, but fun and free nonetheless!

Day 3 – Thursday
Guest Keynote: Steve Wozniak – I don’t really have a huge desire to listen to Woz; smart dude with lots of money and the ability to opine about technology. Fine. To make sure people made it to this talk, it was not broadcast like the other keynotes, so I just opted to skip it.

Overall, the food stations and snacks were far skimpier on this day. I still never had to visit the main food tents, but I definitely had to go hunting for food otherwise.

“MAKE IT RAIN!” How to Save Money Monitoring, Managing, and Securing Your Cloud Using the Splunk App for AWS – By now, I know to expect high-level statements when I see CEO, CTO, or other high-level manager titles in the speaker list for a talk. And then a talk like this comes around to prove me wrong. (I’ve honed my stance on this to apply only to Splunk as a company itself when its higher-level managers speak.) This talk was an actionable demonstration of tying some important AWS logs into Splunk and showing how that is valuable for operations and even security. A slightly short talk, but really nice to sit through as someone new to Splunk, new to AWS, and thus new to doing both at once.

From Threat Modeling to Automated Response – Identifying the Adversary and Dynamically Moving to Incident Response – Yet another talk about threat hunting and TTPs and adversary profiling. A good talk, but I don’t think it included anything that I didn’t already know.

If there’s anything that will define my year, it’ll be the prevalence of Kill Chains, Threat Profiling, and Threat Hunting. I can’t escape the same ol’ statements about them: throughout the Cisco CCNA Cyber Ops course, the SANS FOR508 course, multiple talks at Splunk .conf, and beyond. I’ve long had a post waiting about how and why threat hunting is such a big deal these days (it comes down to getting internal value and blending offense into internal blue teams, plus trying to make sense of the new breed of security tools that don’t just alarm on bad, but require human decision-making to piece multiple things together…).

Blueprints for Actionable Alerts – This apparently is a version of a talk given for several years, and it kinda feels like it. For some strange reason, I didn’t get much out of this, though on the surface I should have. It’s really a discussion of figuring out how to tackle an environment with 4000 alerts a day, reducing that piece by piece until it’s manageable and useful. I think everyone sort of does this their own way, and all the ways dance around the same gameplan.

Splunk P30X: Become a Lean, Mean, Splunkin’ Machine in 30 Days – Probably the best and most useful talk I attended at the con. The point is to have an actionable, lunch-hour plan to tackle and do various Splunk activities to culminate in being able to pass the Fundamentals exam at the end. I loved the actionable approach to this, as well as the follow-up activities the authors are releasing to support it that I can directly consume. Not only the 30-day plan, but also additional materials for newbies. Wonderful talk!

Day 4 – Friday
Nothing much exciting here, just a full day of getting back home.

Overall Thoughts
I loved the overall experience and the benefits of going; it was fun, I got to visit a theme park, and so on. This could double as a family vacation if you bring the family along. Next year the con is in Vegas, and I’ll admit that has less appeal to me as a venue/area.

If I go again, and have others with me, I’ll lobby somewhat hard to get signed up for one of the competitions they hold, either Boss of the NOC or Boss of the SOC, where teams pore over and parse data to answer questions about operations or security incidents, respectively.

getting into and growing inside the infosec industry

A revival of sorts on content from BHIS on getting into the Infosec industry, including A Career in Information Security FAQ Part 1. Pretty good stuff! But this section really stuck out to me:

The customer service, tech support, help desk, etc., these jobs are crucial to forming a solid background in computer science. Learn how to solve problems effectively. Learn how to discern between useful web search results and wastes of time. Employers don’t want to hire you for what you know. I generally believe that anyone (some computer background) can be trained to accomplish digital tasks. I can’t train you to manage your time well. We can’t train people to be nice, treat others like human beings, or to be steady under pressure. And truly, those are the skills that will put you at the front of the line. It worked for me and everyone else at BHIS too.

I would include other skills such as asking questions, being curious, being tenacious, looking at ways to break and fix things, and having a quick mind to solve puzzles.

And to be honest, that whole post is a wonderful bit of encouragement and advice for anyone to read, newbie or jaded veteran. Things like, “That motto is ‘Fail Fast, Fail Often, and Fail Forward’. When you are working on solving a problem spend more time failing and less time analyzing the problem from a distance,” and “One of the most critical skills in information security is the ability to go off script.” That’s gold right there, alone!

Addendum: I do want to point out the question towards the bottom of the post about the biggest hurdles in first getting started. And it might be obvious, but it bears constantly repeating that the two biggest items are 1) experience, and 2) imposter syndrome symptoms. The former is just something you get past after a few years of work. The second is a lifelong personality and internal compass issue where we just have to come to terms with the scope of infosec and how no person can begin to swallow that whole ocean. Learn what you can, balance your life, fail fast, move forward, get better, succeed.

the cat and mouse game of security improvement

I don’t often find fairly general articles to have enough interesting nuggets and quotes to bother saving, but sometimes they just flow so well and include plenty of head-nodding things to agree with, all with wording that I appreciate. One such article came across from Dark Reading, Think Like an Attacker, How a Red Team Operates. Dark Reading seems to like limiting the ability to read articles, so I don’t mind being a bit liberal in pulling out quotes I like.

“The whole idea is, the red team is designed to make the blue team better,” explains John Sawyer, associate director of services and red team leader at IOActive. It’s the devil’s advocate within an organization; the group responsible for finding gaps the business may not notice. I just love that sound bite. I want that to be my elevator job description.

“The main function of red teaming is adversary simulation,” says Schwartz. “You are simulating, as realistically as possible, a dedicated adversary that would be trying to accomplish some goal. It’s always going to be unique to the target. If you’re going to get the maximum value out of having a red teaming function, you probably want to go for maximum impact.” The early part of the article does a great job of succinctly comparing pen testing and red teaming while also illustrating how these have changed over time. Old school pen testing has shifted to being called red teaming, partly to differentiate it as pen testing has become commoditized.

The team ends up chaining together a small series of attacks – low-level vulnerabilities, misconfigurations – and uses those to own the entire domain without the business knowing they were there, he says. Typically, few employees know when a red team is live.

Red and blue teams may work together in some engagements to provide visibility into the red team’s actions. For example, if the red team launches a phishing attack, the blue team could view whether someone opened a malicious attachment, and whether it was blocked. After a test, the two can discuss which actions led to which consequences. Beyond actually enjoying it, this is my whole value proposition for my interest in offense and red teams: It makes my defense better. Which makes me get better on offense. Which makes my defense get even better… Getting a root shell or DA credential is the addiction, the satisfaction is passing on the information to make improvements.

More and more companies are starting to realize if they limit themselves to the core fundamentals of security, they’re waiting for something bad to happen in order to know whether their steps are effective, says Schwartz. Red teaming can help them get ahead of that… Many companies are building red teams in-house to improve security; some hire outside help.

The main reason to build a red team internally is that it grows and improves along with defenses. As security improves, so do the skills of red teamers. Offensive experts and defenders can attack one another, playing a cat-and-mouse game that improves enterprise security, he continues. Internal teams are also easier to justify from a privacy perspective.

Overall, the pros argue a full red team can help prepare for modern attackers who will scour your business for vulnerabilities and exploit them – and, unlike real adversaries, they’ll help you stop them.

“The difference between a red team and an adversary is, the red team tells you what they did after they did it,” Schwartz says.

That’s such a strong ending to this article, that I had to pull a bunch out right there. Wonderful!

rapid7 releases 2018 under the hoodie pentesting report

Rapid7 has released the second edition of their now-annual “Under the Hoodie” report, a compilation of information and statistics gathered across Rapid7’s penetration testing teams. There’s probably nothing terribly surprising in here, but it’s always nice to have some raw numbers and anecdotes in pocket for various conversations. Here are a few interesting tidbits or quotes I wanted to pull out.

“Relying entirely on an automated solution or a short list of canned exploits is likely to meet with failure, while a more thorough, hands-on approach nets significant wins for the attacker.” This statement has importance for internal security testing, third-party testing, and also for defenses. The first two can be obvious, but the last one about defense helps frame models, for instance the impact of an internal threat or an attacker specifically targeting a company rather than just automating a search for opportunistic moments. It also suggests, between the lines, that an attacker who puts in hands-on effort and isn’t time-boxed like a pen tester can see success.

“Furthermore, these results imply that if the penetration tester is not detected within a day, it’s unlikely the malicious activity will be detected at all.” Detection is a big deal. I’d also throw in the practice of threat hunting to find successful attackers who have gotten past the outer layer of defense and alarms. I recently deleted a draft about the whys and hows of the rise of threat hunting/intelligence (I posited it was a combination of the reduction in AV/IPS signature success, the complexity of environments, the rise of offense-friendly staff looking for offensive things to do, and other factors…). Prevention is important, but solid and effective detection matters.

“The number one issue that causes the most consternation among penetration testers is solid network segmentation. If they cannot traverse logical boundaries between environments, it can be extremely difficult to leverage a set of ill-gotten workstation credentials to escalate to domain-wide administrative privileges; even if a powerful service account has been compromised, if there’s no route between targets, the pentester must effectively start over again with another foothold in the network.”

Other factors that cause frustration for pen testers are multi-factor authentication for accounts, least privilege practices on accounts, strong patching and vulnerability management practices, and awareness to spot and report phishing campaigns, social engineering, and other low-tech attacks. What’s fun is how these 5 items are disciplines that blend security with other, very different departments: The network team for segmentation, systems/developers for 2FA/MFA, systems for patching, IAM for least access, and everyone for awareness. You can’t just boost one area of the company (or just security itself).

changes to the site – sidebar links are a bit of a relic

Recently went through and cleaned out dead links in my Feedly news feeds. Not only did this kick in plenty of nostalgia, but it also reminded me that I should update the sidebar links on my blog! While going through these, I sat back and thought about how time-consuming this process is, how annoying it is to update WordPress themes (just give me a raw txt file I can put code in, rather than wrestling with weird interpretations and random carriage returns!), and for what personal purpose this even matters.

In short, I need to sit back and think about what exactly I am doing with this blog site and how to make it better for me. Moving to hosted WordPress has helped with site maintenance, but has made other things more difficult. In the past, I always edited files by hand and coded things directly, but these days I tend to use the WYSIWYG editor, and it’s not usually quite what you see… it’s more like wrestling a slippery eel to get things to look the way I want rather than the way the themes want. This makes updating the sidebar annoying. At best.

There are really four parts to my blog: the posted content, the sidebar link list, comments on posts, and the links at the top that spider out to other things about me, with this blog page being the nexus where they converge.

The sidebar links
The extensive sidebar list of links has been part of the site’s identity since the beginning, but it’s also an old school relic.

The list is somewhat save-and-forget, except for some of the most-used items. The rest, I honestly forget are here. For some, it’s still just better to use Google to get the latest, greatest.

These links are also best used by me, and probably not clicked by anyone else, ever. The list is roughly duplicated across my Feedly, podcast subs, YouTube subscriptions, Twitter follows, and Discord server memberships.

I do know that clicking links will place referral pokes to the targets… maybe. It’s one of those ways to get noticed, but I’m not sure blogs and/or comments are “noticed” anymore, or really followed at all. A blog used to be your focal point online that other things revolved around, but these days the social sites have supplanted them. There is also so much flow these days that I don’t ever really “catch up” on blogs I’ve missed. They’re much like IRC or Twitter; you pop in and maybe look at the recent buffer, but the rest of the log is in the past and there’s no reason to spend time reading backwards.

The bottom line: the link sidebar is a relic with questionable value to me, and is annoying to update.

The comments
The comments are easily forgotten, since I don’t get many and don’t expect many. The problem is the lack of two-way discussion. Comments on blogs are often post-and-forget, never looking back for an update without specific effort to do so. It’s far better to follow and tweet to someone on Twitter these days, or in extreme cases, find someone on a discord/slack/IRC.

In the past, prior to all the social networks, blog comments were useful for expanding your exposure: comment on someone else’s blog, put your own link in the comment, and likely get a poke or comment back in return. Again, though, today that is better done on Twitter/Discord and by posting content that is actually useful to consume.

To be fair, comments are cool, akin to a Like, but these days dialogue is best done elsewhere.

The bottom line: Ultimately, comments are an afterthought these days on all but the most popular blog sites, like Krebs’ blog.

The blog contents
But that does bring up the question of why I should ever update the blog. I honestly don’t look back on many things. The two biggest reasons: 1) it shows off my interest, and 2) it lets me organize and solidify thoughts. I may never reference the post itself again, but the act of writing something out helps ingrain the information and thoughts.

It’s not something I really do for anyone else except me, and as a way to sort of demonstrate my interest/enthusiasm/participation in the greater communities.

The posts I re-reference most often are the personal ones, like my yearly goals and results, or links to really informative checklists and processes; things I struggle to put links to in the sidebar, only to forget them!

The bottom line: I still like maintaining the blog and it does have personal value to me.

The personal link nexus
I can’t see this going away anytime soon, except maybe onto a GitHub page with a similar list of links. The whole point is to act as a point of convergence for my “stuff.” A place to find my Twitter link, LinkedIn page, GitHub page, and just a little bit about me (that age-old bio or About page that I feel is still necessary to tell your story properly).

Being able to control this convergence is still an interesting deal, as it lets me decide whether I want my personal name attached to a particular screenname somewhere, but as I get older, I also care less except with my own personal threat models.

As a bonus, I still love my personal domain.

The bottom line: I still plan to use this personal domain and resident site to be my nexus here, and I think I’ll expand the links a bit to include Github and maybe some other spots.

The links on the sidebar could be put into a GitHub repo instead of this site, and probably be more easily updated, too.

I could also use GitHub to save backups of things like my podcast OPML, Feedly feeds export, and so on; things that are not sensitive or inherently private.

A GitHub repo is at least easier to update. And while it might not fix anything about my list of links and its usage, maybe it’ll help me pare it down a bit. Better yet, if I have a Feedly export, why bother with the blog/news lists?

There is also the option of a private GitHub repo for a few other things. I definitely don’t want to make it a huge “backup” of things, since that’s what file-sharing services are for, but at least some of my online presence and “home” page can be tailored a bit in private.

bloodhound, measuring how exposed a domain is

Recently I watched a talk about a tool I’ve known about for a while but just hadn’t gotten to on my to-do list. I briefly used the tool’s output on a target to much success. And after watching the talk at SecDSM, I’ve gotten excited again about employing it at work someday.

BloodHound, by researchers at SpecterOps, is a tool that exposes Active Directory permissions and relationships, with the goal of achieving Domain Admin (DA) or high-value access in AD to pwn the domain entirely and win the game. This might sound unexciting if you only think about accounts, groups, and group memberships, but BloodHound goes deeper and wider by looking at the actual underlying AD object permissions and how those objects relate to various computers in the domain.

One of the best parts of the talk comes near the end, when they discuss metrics. I really loved these metrics, which effectively measure how much exposure the domain has and how much effort an attacker will have to exert to pwn the environment via AD permissions. They also illustrate opportunities to detect the attackers.

  • Users with Path to DA (target: 5%) – The lower, the better, as you really don’t want to think that every user that could be compromised could lead to the end of the domain.
  • Computers with path to DA (target: 5%) – Same story here, you don’t want to think most systems are just a few hops away from DA. Even a single malware/phishing success is dire!
  • Average Path to DA Length (target: 5) – The longer the better, as you want attackers to go through as many steps as possible to get DA.
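As a rough sketch of how the first and last of those metrics could be computed from an attack-path graph (this is not BloodHound’s actual implementation, which runs Cypher queries against Neo4j; the users and edges below are made up):

```python
from collections import deque

def shortest_path_len(graph, start, target):
    """BFS shortest path length (in hops) from start to target; None if unreachable."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == target:
            return dist
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

def da_exposure(graph, users, da):
    """Percent of users with any path to DA, and average path length among them."""
    lengths = [l for l in (shortest_path_len(graph, u, da) for u in users) if l is not None]
    pct = 100.0 * len(lengths) / len(users) if users else 0.0
    avg = sum(lengths) / len(lengths) if lengths else 0.0
    return pct, avg

# Toy graph: an edge means "has a session/permission edge toward".
graph = {
    "alice": ["ws1"], "ws1": ["svc-sql"], "svc-sql": ["DA"],
    "bob": ["ws2"],  # dead end, no path to DA
}
pct, avg = da_exposure(graph, ["alice", "bob"], "DA")
```

In this toy graph only alice reaches DA (50% exposure, 3 hops), which is the kind of number you would then push down toward the 5% target.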

hackthebox progress over the summer, meeting and exceeding my goals

Part of the reason it took so long to take my GCFA exam was the splitting of my study time with Hack The Box progress. Earlier this year I bought VIP access to HTB, and I wanted to keep practiced up with, and learning new, offensive skills. I did more than I was expecting, and after making some friends smarter than me, I was actually able to far surpass my goals and expectations, achieving 100% completion and the top Omniscient ranking sometime in mid-August. I still have to go back and properly learn some of the things I found way too difficult to do alone (let’s face it, the best people are the best because they learn through teams, and the best red teams have multiple people): namely binary exploitation and reversing. I get how they work, but I need my hand held way too much right now! I also still have the “optional” sections to complete (Fortress and Endgame), and I’d like to dive into RastaLabs and Offshore, probably in 2019.

Really, my main goal was to keep the skills and processes I developed in the PWK fresh, while also learning new and more advanced tricks and tools and techniques. And I feel like I’ve succeeded in that aspect. These skills get dusty and rusty if not practiced regularly, either on one’s own or while in the course of work duties.

passed giac certified forensics analyst (gcfa) exam

This past Friday I had the pleasure to sit for the GCFA (GIAC Certified Forensic Analyst) exam and pass with a 94% score. Quite the relief after a summer of (somewhat slowly) making study progress. In May, I attended the SANS FOR508 training at SANS West (San Diego). Shortly after, I took a bit of a break, and since then have slowly studied and gotten ready for my exam attempt. I’ve blogged about the course before, so I’ll try not to rehash anything. The course was my first SANS experience, and this exam was thus my first GIAC exam experience as well.

Did you take the practice exams? Yes I did. In late August I took the first practice and scored an 83% with only about 9 minutes remaining at the end. At this point I was pretty nervous, but I also was not quite done with my study plans, either. A week later I took the second practice and scored an improved 93% with 30 minutes to spare. They were definitely helpful to see the exam format, get familiar with the interface, and get a feel for the question style. The real exam felt extremely similar; while the questions were not duplicated, they felt written by the same author(s) in the same style as the practice ones. For the second practice, I turned on the ability to see explanations for both correct and wrong answers; on the first attempt I didn’t know that option existed and just saw my missed answers. Also, I limited myself to my books and my digital index with no spreadsheet search functions; just scrolling and eyeballing. I also kept paper nearby to write down any concepts I missed, or got correct but struggled with, for later review.

Would you recommend the practice exams? Yes! I probably could have passed if I had skipped them, but they did absolute wonders for allowing me some feedback on where I stood and gave me a chance to gain confidence and familiarity with the question styles. The practice also gave me two chances to test out my index, hone it, and become even more familiar with the books, adding to my efficiency in an exam where time is precious. Most importantly, this whole study process helped me grasp and “get” the content so much better than just the course alone.

Did you have your own index for the exam? Of course! My goal with the index was not necessarily for it to answer every question for me, but to give me enough information to reach a probable conclusion, and then point me to the correct places in the materials to confirm that answer. My true source of answers is the books, and I wanted enough context to be able to look up the appropriate information in the right place when I came across a term or subject in the exam. My index ended up being about 45 pages landscape, with 1,536 rows at 8-point font. Having it top-bound was wonderful (about $13 printed online at FedEx/Kinkos).

When creating my index, I started out with a spreadsheet tab for each book. I had four columns: SUBJECT, TERM, DESCRIPTION, BOOK-PAGE. In retrospect, the SUBJECT column was never used by me, and I’ll leave it out on future exams. For the spreadsheet tabs, I’d leave the notes in chronological order. On a separate MASTER tab, I would regularly copy/paste the other contents into it and sort by the TERM column to see my MASTER index. This MASTER tab was what I would later print out.
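For what it's worth, the per-book tab and MASTER-tab workflow above boils down to a simple merge-and-sort. Here's a minimal Python sketch of that idea; the tab names and sample entries are made up for illustration, not taken from my actual index:

```python
# Hypothetical sketch of the MASTER-tab merge: each book tab is a list of
# (TERM, DESCRIPTION, PAGE) entries kept in reading order, and the MASTER
# view is simply all entries combined and sorted by TERM. Entries below
# are illustrative placeholders, not real index content.

def build_master(book_tabs):
    """Merge per-book index tabs into one list sorted by TERM (case-insensitive)."""
    master = []
    for book, entries in book_tabs.items():
        for term, description, page in entries:
            # BOOK-PAGE column combines the tab name and page number.
            master.append((term, description, f"{book}-{page}"))
    # Sort by TERM so related entries group together, as in the printed index.
    master.sort(key=lambda row: row[0].lower())
    return master

book_tabs = {
    "B1": [("Prefetch", "Evidence of execution; C:\\Windows\\Prefetch", 42),
           ("Shimcache", "AppCompatCache; execution artifact", 57)],
    "B2": [("MFT", "NTFS Master File Table", 12)],
}

for term, desc, ref in build_master(book_tabs):
    print(f"{term}\t{desc}\t{ref}")
```

Keeping the per-tab lists in chronological order while only sorting the merged copy mirrors the spreadsheet approach: the tabs stay easy to maintain while reading, and the MASTER tab is what gets printed.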

If a term appeared more than once, it got more than one entry; I didn't want to squish multiple BOOK-PAGE numbers into a single row. For topics spanning multiple pages, I'd make highlighter arrows in the books to prompt me to look ahead if the topic continued. If a topic had multiple terms or an acronym, I'd include all of them as their own entries. I tried to avoid the whole "See Topic X" approach. I did it early on, but hated it and moved away from it later (the one time I came across such an entry during the exam, I cursed myself). The goal was to go from Index to Books, not Index to Index to Index. I tried to be reasonably complete in the Index, but invariably questions would ask for very detailed specifics, and I didn't want to solely trust myself to transpose the terms correctly, so I didn't try to be exhaustive; as said earlier, get to the books efficiently! I also indexed terms on the blue and red posters. (Both of which I used in the exam, though much of that information can, in fact, be found in the books.)

I initially limited myself to a single line of description per term, but eventually I acquiesced and allowed myself multiple lines (hold Shift when pressing Enter while in entry mode to add a newline inside a cell). My index would have been longer and even more immediately useful had I not decided that pretty late.

I also used sticky tabs at the top of the books to mark key pages and sections. This way I had the option to skip my index altogether if I knew what general section I wanted to flip to. I used them a lot, too, not just during the exams, but when studying as well! I honestly think doing this saved my butt.

To be honest, I’m a natural information organizer. If I were more of a social person, I’d probably be a project manager! I’m also a note-taker, so doing this index was a loving exercise, rather than a chore. It also helps to remember that this index is a one-time use item. It doesn’t need to be perfect or pass muster for inspection by an editor. Everyone has their own level of perfection they need, but I know my index isn’t without mistakes, has holes, and maybe has more or less than it should. But that’s why I wanted to make sure it led me to the books as much as needed; trust myself, but verify the answer!

What was your study plan? After the 6-day course in San Diego, I probably took a good two weeks off. After that, I started going through the course books again. My goal was to read every word of the books (slides and notes). And yes, that took a while. I would highlight orange every tool mentioned in the books, and write it into a separate notebook of mine (my own personal list of tools). I would highlight key topics and statements with a green highlighter. After about two books, I actually started adding key terms, concepts, tools, and topics into a spreadsheet to begin my actual index. I then went back and caught up the first two books with a quicker pass.

Once done reading the books, I accessed the On Demand content to listen to the lectures again, follow the slides, and follow along in my books. This was essentially another pass through the material, and a second full pass to populate my index with things I had missed or wanted to flesh out. For instance, I didn't decide to include full command examples until my second pass. While winding down the On Demand materials, I also started going back and redoing the lab exercises, at least as much as I could (some tools expired). (I did *not* include the exercise workbook notes in my index, and I wish I had.) Doing all of the above really helped cement the material in my head, but it also caused me to truly *get* it, if that makes sense. Context fell into place, along with the reasons for various things, and it all feels natural and confident now.

In the day or two before the exam, I limited myself to just flipping through the books. I took the early part of that week off, and doing this allowed me to get familiar with the tabbed sections again, for quick reference and flipping to my tabs.

How was your actual exam experience? Pretty good! I got in early and got going pretty well. The exam itself is a brutal slog of 3 hours, and I made plenty of use of that time to be as sure of my answers as I could be. Even with my index, there were a few questions that had me somewhat stumped or utterly unsure where to look for the information. Thankfully, in other cases where my index didn't have the proper information, my knowledge of the books led me to the right sections. The exam questions were some of the best-written I've seen: to the point, clear, proper English, but you still have to read them carefully to pick up on any twists or tricks afoot. Honestly, the questions and answers were wonderful and did nothing to detract from the experience or the ability to demonstrate mastery of the topics.

Is there anything with the materials brought on-site for the exam that you’d do differently? Without getting too specific, I think it would have been useful to better document or print screenshots of the output of the tools mentioned. Not all of them, since there’s a ton! But any of them are fair game for questions. Ideally, it should be enough to have used any and all tools during the labs or self-study when re-doing the labs. But that does take effort, as the labs themselves will not use every plugin and tool mentioned in the books. I also am not sure how one would consume such print-outs efficiently while taking the exam, so maybe I was better off without them!

warren buffett advice for people

Many, many idealistic years ago I used to collect books of neat things, sayings, zen koans, and various other things to find serenity in life. Always very helpful. I used to keep many of these as a rotating email signature back in the 90s, which I honestly rarely ever enabled on messages.

Also very helpful are good lists of advice from people older and wiser. I wanted to keep this wonderful, achievable list from Warren Buffett and apply it to all people, not just the young:

Advice for all the young people:

  1. read and write more
  2. stay healthy and fit
  3. networking is about giving
  4. practice public speaking
  5. stay teachable
  6. find a mentor
  7. keep in touch with friends
  8. you are not your job
  9. know when to leave
  10. don’t spend what you don’t have

an oscp journey with a m0nk3h

I see lots of OSCP reviews, and I usually don't post or point to many since, well, there are so many, and students should start learning to Google early on. But this one by m0nk3h is amazingly detailed and quite useful for anyone hoping to take the OSCP or in the process of it. It goes above and beyond the simple PWK-and-exam experience review, and also includes tips, tricks, and useful commands in one shot.

In fact, scrolling back in the dude's history, I like lots of his posts and their formats; good stuff! (And I was curious, like everyone who attempts the OSCP exam, about his background and how it compares or may have an impact.) I particularly like the SpecterOps course review, as I'd love to take that course some day.

a journey into a red team

It’s been a pretty busy year so far, and I’ve been remiss in posting things. I think one influence has been my use of Pocket. Instead of posting things here, I put things I want to check out later on into Pocket. Of course, then I just don’t get back to Pocket!

Anyway, here's a slide deck for "A journey into a red team" by @MrUn1k0d3r. I imagine the presentation will pop up on the NorthSec YouTube channel at some point.

my first netwars experience and netwars coin

During my first SANS experience last week, I also opted to participate in NetWars Core the nights of Day 4 and Day 5. This was also my first NetWars experience, and I came in having pretty low goals, since I didn’t really know what this was all about. I basically wanted to see the top 10 leaderboard for first timers and unlock Level 3 by the end of the event. Turns out, I unlocked Level 4, held onto overall first for several hours, and finished in 2nd place amongst the individuals (and 6th overall), earning me a NetWars coin and invite to the Tournament of Champions in December!

For those unfamiliar, NetWars Core is a two-day event held on site for 3 hours each night. You show up with a laptop in hand and get handed a USB stick. On the USB stick are some supporting documents and a virtual machine image to load up. Once you're loaded up and signed into the event, the countdown begins! Once started, the event website allows access to a battery of questions whose answers are found either in the supporting documents or on the VM itself. These questions cover a wide variety of information security topics: Linux and Windows systems administration commands, technical trivia, analyzing forensic evidence, examining network traffic, decoding hidden messages, and reversing malware. There are things for defenders and attackers alike.

Upon arriving, I was given the event USB stick, an instructional piece of paper, and 2 drink tickets good for free drinks from the open bar in back. The tickets certainly beat paying up to $12 for a glass of wine! And as each night moves on, you can get more tickets from the minders handing them out. The dimly lit room itself slowly filled up with fellow geeks, the glow of laptop monitors, and the swell of some light techno music from the speakers up front.

The desks all have power provided and a wired switch for use if one does not want to trust the wireless network, which itself was solid the entire event. The instruction sheet gave details on signing up for an account at CounterHack and the username and password for the Virtual Machine. A web-based VM could be connected to from the CounterHack site (with added latency, of course), or one could just use the VM included on the USB stick.

The USB stick included the event VM and some additional files. Before the event truly started, I copied all of the files to my local Windows system and fired up the VM in VMware Workstation 14 Pro (trial). It converted and booted without issue, and the default networking settings allowed it access out to the Internet (needed) through my host laptop. I rebooted it and gave it some more RAM for good measure.

After waiting around a bit, the event kicked off! I started out super slowly as I acclimated myself again to the Linux VM, sheepishly Googling up some commands that I should have known, and otherwise having some issues getting into a groove. But eventually I did hit a groove.

Level 1 and Level 2 are basically traditional Jeopardy-style CTF questions. You are asked something about a file or command or binary or whatever, and you work to provide the answer. Sometimes that just means running a few commands; sometimes it means doing some forensics or attacks on various artifacts. Each question has points assigned to it. The more questions you answer, the more points you get, and the more questions open up below them. The first incorrect answer on any question didn't cost anything, but subsequent tries would cost more and more points (up to a certain amount).
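As an aside, the penalty scheme described above (a free first miss, then escalating penalties up to a cap) can be sketched as a tiny function. The numbers below are made up purely to illustrate the shape of the scheme; I don't know the actual NetWars point values:

```python
# Illustrative only: models "first wrong answer is free, later wrong
# answers cost more and more, up to a cap." The step and cap values
# are hypothetical, not actual NetWars scoring.

def answer_score(points, wrong_attempts, step=1, cap=5):
    """Points earned for a correct answer after some number of wrong attempts."""
    if wrong_attempts <= 1:
        penalty = 0  # the first miss costs nothing
    else:
        # Each miss after the first costs `step` more, capped at `cap`.
        penalty = min((wrong_attempts - 1) * step, cap)
    return points - penalty
```

The practical takeaway is the same one I used during the event: a single guess is free, so a plausible first answer costs nothing, but spray-and-pray guessing gets expensive quickly.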

Most questions have hints you can unlock. These hints do not cost any points, and you can unlock as many or as few as you want. The only role they play is as a tiebreaker in the unlikely event of a tie in points. The hints themselves range from terms you can Google all the way up to actually giving you the command(s) to run to find the answer. This hint system makes the early parts of the event very accessible to anyone with even passing Linux experience.

After hitting a groove and getting a feel for the questions and VM, I appeared on the top 10 leaderboard, and for the rest of the 3 hours on Day 1 I skipped up and down with my fellow geeks from about 10th place up to a peak of 3rd place or so for brief moments. Early on, one team surged very far into the lead, and the admins of the event sought them out. Within a few minutes, their team was removed from the boards. Turns out they had some veterans on the team, and to avoid discouraging other teams, the admins “ghosted” their score off the board.

The scoreboards are broken down into a top 10 list of individuals and a top 10 list of teams. I believe teams can have five or fewer members. In other events, I think veterans and first-timers are also separated out, but for this event, we were all put onto the same boards (they gave a reason why, but I didn't follow it).

At the end of Day 1, I was firmly into Level 2 with 100 points, and sitting at 8th place on the boards. The first place individual had 146 points and the highest team (visible) had 155 points. I retired to my room, but spent the next 3-4 hours working on various questions that I hadn’t gotten to. I think the small break I took to get to my room and settle in really helped, as I hit a stride over those few hours and solved quite a few puzzles and staged up the answers for Day 2. While the game system itself is closed overnight, as long as you keep the question window up and have any hints you want opened up, you can still see and work on the questions on the local VM. This is true for Level 1 and Level 2, but not for subsequent levels.

Day 2
As Day 2 started, I had a flurry of points submitted in the first 60 minutes, and I actually surged into first place overall with 241 points and 2.5 hours remaining. At this point, I hit a wall. Over the next 1.5 hours, I held onto my lead with 269 points, but others were slowly closing in as I stalled out.

Eventually, at Level 3, you can attack and attempt to infiltrate remote systems and networks as an attacker, and from there begin attacking other systems in that DMZ network while answering questions on a separate scoreboard. I was rusty with this, as my day job does not involve attacking systems. The Level 3 hints also became time sucks, with some seriously deep trivia that required heavy Googling and searching. It would help immensely to have a wingman doing just those items! Level 4 involves even deeper access. Ultimately, once Level 5 is reached, those contestants have to defend their servers and services against the others at Level 5, while attempting to attack them and earn points through uptime.

I spent the next hour watching as a rival passed me up for first place, and the teams jostled for their positions. With 30 minutes left, the admins replaced the scoreboard with a countdown clock. After time ran out, winners were announced and I found out that no one else managed to pass me for second place. I finished with 275 points, with first place sitting at 297. The ghosted team of veterans finished with the top score of 370, but the admins also awarded prizes for the next team, which had 301 points; only 6 points ahead of their next rival!

Honestly, had I tackled this event last year, fresh off my OSCP certification, I realistically would have expected at least another 40 points or so. I had lots of time left, and not much comfort level with the attacks I needed to perform at the later stages.

I had an absolute blast with this event and the question formats. I’m looking forward to doing another one of these, or also trying out the DFIR and Defense ones as well.

Tips from a First-Timer
Spend Day 1 trying to unlock everything you can, including hints. You want to get as far into the levels as possible, with the ultimate goal of getting into the Level 3 and Level 4 stages.

Try to make sure you get the question finished that yields root access to the local VM. This is important in order to progress further.

With Level 3 unlocked, start attacking it right away. Again, it's more about getting as far as possible, rather than clearing each level completely. The points per question trend a little upwards as you go.

One could conceivably unlock level 4 without doing much at level 3. Just from my perspective, I think getting dug into level 3 is more important.

The night between the days can be spent researching Level 3+ strategies, but also backfilling Level 1/2 questions and researching Level 3 hints.

Day 2 should be spent trying to open up Level 5 by performing successful attacks and eventual pivoting into internal networks.

For some added drama, the admins turn off the scoreboard for the final 30 minutes. If you’re feeling brave, feel free to bank some points to score during this time. This would be an excellent moment to finish submitting any level 1 and level 2 answers that weren’t needed to open up the higher levels. Of course, the downside might be encountering technical issues that prevent more scores from being posted, so do so at your own peril!

I strongly suggest writing down and saving answers out to a text file in the crazy event the VM crashes or becomes unstable. Near the end of Day 2, my VM's Xfce session often became unresponsive, and I wasn't in a position where I wanted to reboot it fully. I probably lost a good 30 minutes of productivity this way.

Lastly, have fun. Use those drink tickets if you are so inclined, and enjoy!

my first sans event with for508 in san diego

This past week I attended my first SANS event, SANS West in San Diego. I took the FOR508 course, Advanced Digital Forensics, Incident Response, and Threat Hunting, with Eric Zimmerman. Overall, the course and the SANS experience were excellent, and I hope to do it again next year!

I chose this course as forensics and incident response at this depth isn’t something I’ve heavily done. I’ve looked into malware incidents and done Windows admin troubleshooting for years, but this course takes things to another level with being able to dissect memory and disk images to find badness. My goal is to continue being well-rounded. I can attack systems, perform forensics on the attacks, inform my defenses to improve them, and complete the loop by doing better attacks. This course helped directly improve one of those areas.

I’ve also never had the opportunity to take training like SANS. There’s a whole list of courses I’d like to take and not nearly enough time to do them all, so I wanted to aim high and make sure I had plenty to learn for the experience. I think I was right on with my pick!

This course turned out wonderfully for me. Days 1 through 4 were spent looking for artifacts in Windows disk images and memory dumps using the SIFT Workstation. I knew enough through years of Windows admin troubleshooting to immediately grasp about 60% of it, and the remaining 40% was very accessible for me. On Day 3 in particular I learned some nuances I didn’t know before, like the shimcache and prefetch files and how to use powerful automated tools to make the work easier. Honestly, I can’t imagine the tedious work to find artifacts in gigs of data before these automated tools were around!

Day 5 was super dense and moved into relatively new territory for me, diving into the deep end with NTFS forensics. It was definitely the hardest day for me, and considering the long stares from just about everyone in the class, I wasn't alone in this!

Day 6 involved a day-long capstone event where we broke into groups and did a blitz investigation of an incident. This was pretty fun, even though my group didn’t get a coin, but I feel like I learned a lot more by being able to not only put tools to work, but to also find many of the actual correct answers from the incident. It certainly helps the confidence level!

I also really love the process of forensics. It's not about following a list of commands or a rigid sequence to find answers. It's about running all sorts of things to find artifacts, and then stitching together a picture through fact and some gut feel about what happened. You run 10 commands, put some things together, and maybe even go back and run some of the commands again, but with better information, like specific times or locations, to do a deeper dive. Each piece of the puzzle found allows the investigator to look at every other piece of evidence in a new light. I also learned the benefit of good corporate baselining and of having the capability to pull full disk and memory images. This is a big deal for success with forensics capabilities.

What’s next?
First, I have plenty of studying and practice to tackle before the GCFA certification exam. After that, I can start planning my course next year, with the front-runner being SEC 542/GWAPT. Yes, this is an offensive cert, but it’s compelling right now to do something red team and shore up what I feel I’m weaker with: web app testing. If this training cycle continues, I’d like to alternate defense and offense each year.

Any lessons learned?
I hesitated bringing a second, portable laptop monitor. But there were several in class who had space at their spot for it. Considering my laptop of choice already has smaller resolution compared to current systems, I would have brought the second monitor if I did it again. Worst case, we’re packed into our seats and I don’t have room during the days, but I could still use it for NetWars or working in the hotel room.

That said, I wouldn’t mind a slightly more modern laptop, just from a resolution/screen real estate standpoint. My main hacking system is a ~2013 Lenovo X230 with upgraded disks and RAM. It’s wonderful for the most part, but I could use a newer model 470 or something that remains portable, but allows for good screen resolution.

Turn off host AV. Be sure you are comfortable using your VM host of choice. Don’t use a work computer unless you have full administrative control over it and its protections. This includes turning off malware tools, but also being able to access things like Google Docs.

I’m saving the details for a separate post, but always be sure to sign up for and try out NetWars!

desirable red team candidates article

I liked this post by Tim MalcolmVetter: How to Pass a Red Team Interview. Some takeaways from it are definitions of what red team means and characteristics of a good red team candidate.

Trustworthiness – I tend to stick to the term integrity, but mostly because I think it has similar, but broader meaning.

Know the role/know yourself – Kind of goes without saying.

Healthy competition – I like this one, and it should go without saying, but unfortunately it still needs to be said. The offensive teams exist to help test, inform, and improve the blue team. This often just means helping the blue team stop attacks that got through and fix missed weaknesses, but it could mean much deeper interaction.

Creativity – This is one thing I really like about security. In terms of normal IT operations, sure you can be creative with solutions and dealing with people, but often you’re still playing within the bumpers of a bowling lane, i.e. technology capabilities and limitations (developers excepted). With security, you can creatively look between the lanes, over the lanes, under the lanes. You get to poke in the places not normally poked and do so in creative ways on a red team. Good security is as much an art as an objective, to me.

Operational IT experience – I like seeing this item here, though I'm sure entry-level security aspirants hate seeing it. But it continues to be true, even more so for a red team member whose goal is to inform the blue team intelligently. In order to do so, you need some measure of understanding about what the blue team is doing, how they do it, why they do it, and how the business needs get woven into that. It's not just about knowing the gaps in the blue team defenses (because you've felt those gaps from being on the blue team). It also helps when being creative with attacks and when setting up testing labs.

Development skills – This tends to be one of the harder places to get started. 1) Learn some language or scripting tool, 2) find ways to get practice, and 3) find more ways to keep practiced. It’s those last two that can be difficult and often takes real effort unless you have some corporate project set in front of you that you can use that knowledge against. The author’s point here is excellent (though I would add in some Bash knowledge): “Red Team candidates should at least script in python or powershell. Candidates who can build web apps, implants in C/C++, and manage infrastructure will have a huge leg up.” I really like the inclusion of being able to build a web app, maybe not necessarily an “app” as much as a dynamic web page, but along with that comes valuable knowledge in web architecture, server configuration, coding, SQL, etc.

Unique skills – I also like seeing this item, though it’s a hard pill to swallow for so many. But that’s the point of true red teams; a team of people who fill various roles and specializations. A team of people who all kinda do the same thing isn’t very efficient. Now, that’s not to say every person should come into a new team and be the absolute expert on a particular thing or technology or technique, but they should be the expert of that thing on their team. Until you find a good team to call home for a long time, it’s good to be broad and/or have things one is better at, but definitely look for those gaps in any team you interview with and see if you can fit those openings. Chances are good candidates can adapt and utilize their experience, integrity, and creativity to fill most gaps in a red team.

Lastly, I wanted to just flat out quote the author: "…if you can phish and think like a covert systems administrator, then you can probably be successful on a red team." But also know that "if you want to end up doing red team work, then do yourself a favor and get a variety of roles and exposure before moving into red team — it will still be there when you're ready."