btlo lab recommendations based on soc tiers

Regularly over the years I’ve had opportunities to give advice and direction to new or growing cybersecurity folks. I like to point out books, certifications, courses, resources, and most importantly other practical activities to grow knowledge and confidence as we all forge career paths. I’ve recently discovered and been playing on the Blue Team Labs Online (BTLO) platform which has, as the name suggests, blue team-themed exercises, challenges, and labs. There are nearly 200 labs and standalone challenges on the site, some of which are very difficult while others are relatively simple to solve.

Rather than discuss the platform itself at length, Dimitry Bennett wrote an article about his experience on the BTLO platform that basically says all that needs to be said on the topic.

But, there is still one thing I thought was daunting about the platform: Where to start when one is pretty new to cybersecurity? And this is the challenge any time I talk to someone else about where they’ve come from and where they want to go. All of us bring to the table different levels of experience, knowledge, and comfort with various technical and even non-technical topics. Some of us are very inexperienced with Linux, or have never written a program or script before, or maybe have done very little Windows system administration, but know Linux like no one’s business. What I wanted was a quick cheat sheet on what to suggest to students who wanted to quickly get their hands into the BTLO labs without immediately hitting walls.

This page is meant to help me prescribe labs and challenges to security analysts I encounter who are looking to build particular skills or to experience what common SOC tier expectations look like.

I do want to make clear that the SOC tier expectations and levels of knowledge are just my take on the subject. I’m not going to be correct on all of these, nor will I be correct for how every organization/environment defines the job duties and expectations of each tier. I’ve given this a best effort based on the whole of the labs, since I’ve gone through every single one, and on my own years of experience in the IT and security industry.

I also want to make clear that BTLO does give students a chance to see what they’re getting into. Every lab has a difficulty level assigned to it, the date it was released, the general tools expected to be present, and even the number of solves recorded since the lab was released. All of these can help guide students away from things they may find frustrating.

Here is a quick key to some of the columns in my table.

  • Diff(iculty): Difficulty 1-10, 10 being hardest. My personal subjective value of how difficult this exercise is. Usually this is influenced by how much effort and knowledge may be needed to complete it.
  • SOC: My gut feel on what SOC analyst tier level I would expect to complete these exercises. Some tasks are pretty normal for tier 1 SOC analysts, whereas some of the more involved analysis may be reserved for higher tiers. I add a “+” if this task kinda overlaps into a higher tier. As an example, analyzing an image of live system memory or a PE executable file is typically reserved for more experienced analysts.
  • Skills: My summary of the tools needed. If you don’t know Wireshark and want to learn more, then look at the easier Wireshark exercises. Of particular note, I make sure to list an OS if knowledge of or comfort using that OS is a huge help in solving the exercise. Adding “administration” to the OS is my way of saying that experience administering that OS would be very helpful.
  • Notes: My very quick reminder about what the main point of this is.

INVESTIGATIONS (by difficulty & SOC level)

Name | Diff | SOC | Skills | Notes
Deep Blue | 1 | 1 | Windows, Event Logs, PowerShell | Focused, easy, good lesson (use the tool provided!)
Indicators | 2 | 1 | Windows, OSINT, PowerShell, exiftool, notepad | Basic analysis of a strange file that is likely malicious
PhishyV1 | 2 | 1 | Linux, web, email | Mostly entry level, and good foundational skills
Bits | 2 | 1 | Windows, BITS, Event Logs | Good lesson, specific Windows tool (BITS)
Exposed | 2 | 1+ | Git | Focused on git, a bit offense-like
SOC Alpha 1 | 2 | 1+ | ELK, Windows administration/attack | ELK, logs of common attacker actions on Windows
Miner | 2 | 1+ | Wireshark, NetworkMiner, networking, pcaps | Some not-beginner concepts using pcaps
Replaced | 2 | 1+ | Text editor, OSINT, Visual Basic, code | Very straightforward Visual Basic code analysis
Fingerprint | 2 | 1 | Wireshark, JA3, Linux (to use JA3) | Pcap that requires filter use, external JA3 tool
Eradication | 2 | 1+ | YARA, Linux, Joe Sandbox | Running YARA rules on Linux
Mon | 2 | 1 | Windows, Sysmon, IR | Sysmon and malware IR on Windows
Print | 2 | 1+ | Wireshark, Windows, Sysmon, printers | Focus on Windows and printer tricks
RDP | 2 | 1 | Windows RDP | Focus on RDP tricks
Defaced | 3 | 1+ | ELK, web logs, web attacks | ELK, but another way to look at web attacks
Doctor | 3 | 1+ | Linux, web logs, web attacks | Web compromise on a Linux system
SOC Alpha 2 | 3 | 1+ | ELK, Windows administration/attack | ELK, Windows logs of a network attack/malware actions
Exxtensity | 3 | 1+ | Windows, browser extensions/settings | Good focus on browser extensions
Joppers | 3 | 1+ | JavaScript, Windows | No-frills JavaScript parsing
Browser Bruises | 3 | 1+ | Linux, dumpzilla (Python), browser history | Using dumpzilla to analyze local Firefox artifacts
Defender | 3 | 1 | Windows Defender | All about Windows Defender logs
Awwdit | 3 | 1+ | Windows admin, audit policies, basic PE | Focused on audit policies in Windows, basic PE dynamic analysis
Lintro | 3 | 1+ | Linux compromise | Basic Linux compromise and PE analysis
Xhell | 3 | 1+ | Maldoc, olevba, Linux | Old Excel maldoc analysis on Linux, oddball
Venom | 3 | 1+ | Linux logs | Analyzing Linux logs for intrusion
Heaven | 3 | 2 | Windows, PE static/dynamic analysis | Good intro to basic static and dynamic PE analysis
Stealer | 3 | 2 | dnSpy, basic dynamic analysis | Pretty much all dnSpy and basic dynamic analysis
Trash | 3 | 1+ | Windows terminal | Windows and Recycle Bin tricks
Shortcut | 3 | 1+ | Windows shortcuts | Windows and shortcut tricks
Link | 3 | 1+ | Windows admin | Fun with Windows and lnk files
Maldroid | 3 | 1+ | APK, Java, Linux | Introductory analysis of an Android APK on Linux
Ducker | 3 | 1+ | Linux, Docker | Introduction to Docker on Linux
Pie | 3 | 1+ | Linux, web attacks | Analyzing Linux logs in Linux for web compromise
Backstage | 4 | 1+ | Linux, Linux logs, Wireshark | Linux IR looking at logs and pcap
Crypto | 4 | 1+ | Linux, Windows admin, Wireshark, Volatility | Good intro to Volatility and IR with various artifacts
SharpAttack | 4 | 2 | PDF maldoc, JavaScript, Linux | Purely a PDF maldoc analysis
Kill | 4 | 2 | Volatility, Sysinternals, PE basic dynamic | Good intro to memory analysis and exe dynamic analysis
First Day | 4 | 2 | IDA, OSINT, Procmon, pestudio | Starting point for PE-based static analysis, no debugging, OSINT
Logger | 4 | 2 | Windows, basic dynamic analysis, Sysinternals | A few more steps into dynamic analysis
Honey | 4 | 1+ | Windows admin, Redline | A good first romp into Redline; gotta know Windows, though
Total Recall (R) | 4 | 1+ | Windows admin, Redline | Using Redline to investigate a Windows compromise
Ben | 4 | 2 | Windows admin, filesys image, dynamic analysis | Some Windows dynamic analysis tricks for malware
Sam | 4 | 2 | Linux, Windows memory w/ Volatility, Wireshark | Good romp into Volatility and a Windows compromise
Obfuscated | 5 | 2 | Linux, Python | Requires some Python work, light Linux IR
Peak 2 | 5 | 2 | Linux, Wireshark, Sysmon (Linux) | Analyze logs in Linux of a Linux compromise
Bot | 5 | 2 | Linux, OSINT, CTF-like | Linux and some CTF-like challenges
Pandemic | 5 | 2 | Windows admin, PE dynamic analysis | Straightforward Windows PE dynamic analysis
Dot | 5 | 2 | Windows admin, Wireshark, ProcDOT | Tricky ProcDOT tool to track an advanced process compromise
anDRE | 5 | 1+ | APK, Java, Linux | Deeper analysis into an Android APK (still static)
PE | 5 | 1+ | Linux, ELK, Windows admin | More ELK, a bit tricky with osquery logs
Pretium | 5 | 1+ | Wireshark | Tricky Wireshark tricks
Invoice (R) | 5 | 1+ | Linux, ELK, Wireshark, Windows admin | Kinda easy Windows IR investigation with plenty of artifacts
Sticky Situation | 5 | 2 | Windows admin, Autopsy | Analyzing artifacts to answer questions about USB usage
Countdown (R) | 5 | 2 | Windows, Autopsy, IR | Windows IR investigation with some tricks
SOC Alpha 3 | 5 | 1+ | ELK, Windows administration/attack | ELK, Windows logs of malware activities, just deeper
Hashish | 5 | 2 | Windows IR, offense | IR on a local Windows compromise, requires some red knowledge
Too Late | 5 | 2 | Windows admin/attack, Wireshark | Tricky look at Windows malware compromise and artifacts
Test | 5 | 2 | Linux, Linux filesys image | Intermediate Linux IR and filesystem image handling
Rigged | 5 | 2 | Windows admin, Wireshark, IR | Intermediate IR into a Windows compromise
Peak (R) | 6 | 2 | Linux, ELK, Linux compromise, Linux logs | Linux knowledge and using ELK, Linux logs
The Last Jedi | 6 | 2+ | Wireshark, CFF (PE basic static), Redline | Windows malware infection, light PE analysis, Redline-heavy
Baby | 6 | 2 | Linux, Linux filesys image | A little harder than Test, but Linux IR and image handling
Exceltium | 6 | 2+ | Linux, PDF maldoc, shellcode analysis | More advanced PDF maldoc analysis on Linux, involves shellcode
Gotham | 6 | 2 | Windows basic PE static analysis, IDA, OSINT | Basic static analysis of a malicious executable
LOL (R) | 6 | 2 | Windows, IDA, Python uncompyle, OSINT | More RE static analysis
Recovery | 6 | 2 | Linux | Linux IR investigation with Linux logs and knowledge
Rekcod | 6 | 2 | Linux, Docker | Tricky investigation into Docker again
PhishyV2 | 6 | 2+ | Linux, HTML, phishing, PHP, tiny bit CTF | Phish kit analysis, website analysis, coding
Multi Stages | 6 | 3 | Linux, Wireshark, Windows admin, grepping memory | Using Linux to investigate Windows pcap, memory of attack
Poor Joe | 6 | 2 | Windows admin, Volatility, logs | Windows compromise investigation, kinda tricky, logs and live memory
Triage | 6 | 2 | Windows admin, Volatility, logs | Windows compromise investigation, kinda tricky, logs and live memory
Hooked | 6 | 2 | Linux logs | Analyzing Linux logs/host that has been compromised
Eric | 6 | 2+ | Linux, Volatility on Linux memory | A twist on memory analysis with a Linux image
Signal | 6 | 2 | Windows admin, Redline timeline, pcap, basic PE | A mix of involved pcap and file timeline analysis, basic PE
Irritate | 7 | 2+ | Windows admin, dynamic analysis | Lots of fighting with dynamic analysis and a CTF-like hunt
Pretium v2 | 7 | 2 | Wireshark, PacketWhisper, light CTF | Answering questions based on a pcap
Covert | 7 | 2+ | Wireshark, PowerShell coding | Dive into a C2 pcap, PowerShell coding required
Wargames | 7 | 2+ | Linux, Volatility | Memory analysis of a Windows compromise
Ghosted | 7 | 2+ | Linux, Wireshark (pcap), Suricata | Investigating a web recon and attack mostly with Suricata
Evil Maid | 8 | 2+ | Linux, filesys image, SIFT, Windows attack | Windows file system investigation on Linux (SIFT)
The Key | 8 | 2+ | Windows, file system image | Windows file system forensics (and some offense)
Bad Logic (R) | 8 | 2+ | Linux, Windows admin, Wireshark | Large artifacts in a Windows attack investigation
Stuck | 8 | 3 | Windows attack, memory analysis | Windows compromise with lots of tricky pieces
Divorce Court | 9 | 3 | Windows attack, filesys image, IDA | Analyzing Windows compromise, light debugging
Supreme Court | 9 | 3 | Windows attack, filesys image, IDA, C#/PoSH | Analyzing Windows compromise, debugging
Counter | 9 | 3 | IDA, debugging/reversing | Pure debugging/reversing, intermediate dynamic analysis
Multi Stages 2 | 10 | 3 | Linux, Volatility, Windows admin, MFT/timeline | Heavy memory analysis and file timelines; very difficult questions

CHALLENGES (by difficulty & SOC level)

Name | Diff | SOC | Skills | Notes
D3FEND | 1 | 1 | Google (D3FEND framework) | Looking up things in the D3FEND material online
ATT&CK | 1 | 1 | Google (MITRE ATT&CK framework) | Looking up things in the ATT&CK material online
The Report | 1 | 1 | PDF reader | Looking up things in a MITRE report (PDF)
Phishing Analysis 2 | 2 | 1 | Text editor, Thunderbird | Analyzing a phishing email
Phishing Analysis | 2 | 1 | Text editor, Thunderbird | Basic phishing email analysis
Meta | 2 | 1 | Exiftool, OSINT | Analyzing some basic info from image files
Brute Force | 3 | 1 | Linux, text editor, grep | Analyzing logs of an RDP brute force attack
The Planet’s Prestige | 3 | 1+ | Email client, text editor | Analyzing a malicious email plus Office-type attachments
Suspicious USB Stick | 3 | 1+ | Linux, peepdf, strings, VirusTotal, hex editor | Basic analysis of a malicious PDF
PowerShell Analysis – Keylogger | 3 | 2 | PowerShell, text editor | Analysis of a malicious PowerShell script
Log Analysis – Privilege Escalation | 3 | 2 | Linux, bash | Identifying malicious commands in a bash log
Network Analysis – Malware Compromise | 4 | 2 | Wireshark | Answering some basic questions based on a pcap
Log Analysis – Sysmon | 4 | 1+ | Sysmon, Windows, PowerShell | Using Sysmon logs to answer incident questions
Malware Analysis – Ransomware Script | 4 | 2 | Text editor, Linux | Analyzing a bash ransomware script
Log Analysis – Compromised WordPress | 4 | 2 | Linux, Apache logs | Analyzing a web attack from Apache logs on Linux
ILOVEYOU | 4 | 2+ | Windows, text editor, Sysinternals, Regshot | Dynamic non-PE malware analysis
Follina | 4 | 2 | Windows, OSINT, text editor | Analysis of a multi-stage maldoc 0-day
Melissa | 5 | 2+ | oledump, text editor | Non-PE malware analysis
Shiba Insider | 5 | 2 | Wireshark, Steghide, Exiftool, Linux | Unwrapping layers of hidden data and common artifacts
Network Analysis – Web Shell | 5 | 2 | Wireshark, Linux and attacker knowledge | Analyzing a Linux attack using a pcap
Malicious PowerShell Analysis | 5 | 2 | PowerShell | Parsing a PowerShell script and basic obfuscation
Spectrum | 6 | 2 | fcrackzip, PhotoRec, Audacity, exiftool, steghide | Unwrapping layers of hidden data in less common artifacts
Employee of the Year | 6 | 2 | PhotoRec, scalpel, CyberChef, Linux, strings | Recovering and unwrapping various file types
Network Analysis – Ransomware | 6 | 2 | Wireshark, OSINT | Analyzing and even recovering files using a pcap artifact
Memory Analysis – Ransomware | 7 | 2+ | Volatility, Windows, OSINT | Mostly entry-level Volatility analysis of a memory image
Paranoid | 7 | 2 | Linux | Analysis of Linux logs to answer incident questions
Secure Shell | 7 | 2 | Linux, text editor, OSINT | Analysis of an SSH log
The Package | 7 | CTF | OSINT, CTF, math/Python | Don’t recommend; clever CTF-like math riddle
Reverse Engineering – Another Injection | 7 | 3 | IDA (disassembler), Sysinternals, API Monitor | PE analysis and debugging; not entry level, but close to it for malware analysis anyway
Barcode World | 8 | CTF | Linux, Python | Decode a flag from 9000+ image files; don’t recommend
Browser Forensics – Cryptominer | 8 | 2+ | Linux, FTK Imager, JavaScript, Windows | Analyzing an image file for browser artifacts
Reverse Engineering – A Classic Injection | 8 | 3 | IDA, Sysinternals, Windows | Static and dynamic analysis of a PE file
Injection Series – Part 3 | 8 | 3 | IDA, Sysinternals, Windows | Static and dynamic analysis of a PE file
Squid Game | 8 | CTF | Steghide, image editor | CTF-like image stego; don’t recommend
Injection Series Part 4 | 8 | 3 | IDA, Ghidra | PE analysis using a debugger
Secrets | 8 | Red | Python, JWT, Linux probably | Red team web app attack against a weak JWT
Veriarty | 8 | CTF | Hashcat, VeraCrypt, Linux, Thunderbird, gpg | Recovery and decoding of files; don’t recommend
D-Crypt | 9 | CTF | Browserling | Decoding a string several times with minimal guidance
P2SEC – Minigame | 9 | Red | Web app attacking, OSINT, exiftool, PE analysis | Unguided multi-stage, mostly red team basics; long
Classical City | 10 | CTF | Sanity | Decoding ciphers; don’t recommend

learning and training goals for 2021

This is my fifth year tracking my learning, training, and certification goals like this. I am approaching my 20th year in infosec and IT, and through many of those years I sort of idled or just did my job without a ton of real planning. So, now I do that sort of planning to keep me growing and progressing and owning the direction of my skills and career.

This year is already starting out slightly differently. It’s clear now that the world is a changing place, with COVID-19 still impacting socialization and work. Even in good times, it does not look like my current Director at work has any interest in extensive training options that I’d brag about on here. On top of that, I’ve reached a level where there are not as many certifications for me to shoot for. All of this means my choices this year are more informal and geared around learning certain things, rather than specific exams to study for. With all of the uncertainty floating around, this year is also looking to be a cheaper one for me personally as well.

Updated 2/9/2021: I added AWS Developer courses and AWS SysOps Associate courses. I also think I might be overpacking this year again, since preparing for the AWAE is going to be pretty time-consuming.

Formal Training/Certifications

AWAE (WEB-300)/OSWE from Offensive Security – It’s been a while since I’ve done a formal course with OffSec, and I think it’s time to get back on one now that they’re revamping and expanding their offerings. What I’ll likely do is spend some time looking at reviews and other testimonials to get an idea of some pre-course topics to brush up on, and then clear a few months of personal time to dive hard. I’d actually expect to do this exam as well.

Applied Purple Teaming (WWHF/BHIS) – I almost took this course last year, but backed out of it. I enjoyed the value of the course I took from this group last year, so figured I’d check in again this year on it.

Informal Training

Pentester Academy – I still have this subscription, and I’d like to get back onto some of these courses again. I still have SLAE on my list… I also would really like to commit to their red team labs, but don’t want to quite hold myself to it yet.

PentesterLab – I still have this subscription as well, and I’ll carve out some time at some point to progress further on badges.

Zero 2 Automated malware analysis course – I meant to start this late 2020, but life got in the way. I’m adding it to this list to make sure I get it going again.

Azure and M365 courses (900, 500 levels) – Furthering my Azure and cloud knowledge, I plan to take some courses on Azure and Microsoft 365, focusing on the fundamental and security tracks. I don’t have plans to sit for these exams, but I could always decide to do so.

AWS Developer Associate and AWS SysOps Associate – While I don’t necessarily plan to take these associated certifications, I would like to sit down and just casually run through 1 or 2 courses on each subject. I feel like there are things I can learn and use from these two. I’ll probably lean towards looking at offerings on Linux Academy / ACloudGuru or maybe PluralSight if they have a free weekend.

Other

Other one-off courses – I have a bunch of free and acquired courses in my possession that I need to get through at some point. It’s really about sitting down for a weekend or a series of nights and just going through them. No real intense time-spend, but enough to gain some knowledge. Courses like those from PortSwigger or Mudge or Autopsy or other topics.

Books – I continue to have a backlog of books to go over or skim through.

Python, .NET – I’d like to get some introductory exposure to .NET/C#, but this might be asking a lot of me without actual projects on tap to perform.

Certs to renew

CISSP – I’ll renew this again.

CCNA Cyber Ops – This lapses this year, and I have no plans to renew it.

reviewing my 2020 learning and career goals

The 2020 year was not one of the best years for a variety of reasons. My personal productivity was definitely a little lower than past years, but overall satisfying enough considering what a weird and crazy year this was for everyone. Here’s what I did or did not get accomplished.

Last year I started a cloud-focused learning journey by earning my AWS Cloud Practitioner and AWS Solutions Architect Associate certifications. I completed this push by earning my AWS Security Specialty certification in May. That was an interesting experience, as I tested from home while COVID-19 restrictions changed how we work and live. I do feel like my certification is slightly ahead of my practical hands-on experience within AWS. But, we have to start and proceed somewhere!

That certification would prove to be the only one I would earn on the year. I opted not to pursue another “remote” certification, plus there appeared to be no interest in having a training budget at work any longer, which meant no SANS course for this year nor any real reason to give ISC2 more money.

In the latter half of the year I did take a 16-hour course hosted by Wild West Hackin’ Fest: Breaching the Cloud, led by Beau Bullock of BHIS. This was an excellent course over 4 days, and my only regret was not taking full days off for it. Half days kept me pretty busy! This course flowed nicely into my recent forays into bolstering my cloud experience, particularly with Azure. And the focus on the offense side gave me a different perspective than my previous defense/builder studies. I would love to go through this material again for further reinforcement and practice in 2021.

I also spent a good amount of time in PentesterLab this year, completing quite a few of the badges: White, Yellow, Blue, Green, Orange, Serialize, Intercept, Android, PCAP, Essential, and Unix. This was a flurry of activity and learning over the summer. I made progress into other badges, but have plenty of content to get back into as I get time. Still, this was a significant enough outlay of time, and a significant enough addition to my skills and exposure, to make it a highlight of my year.

I also did some online playground activities as well. I solved most of the challenges through the summer on the BHIS Cyber Range. I participated with a work team in the Splunk Boss of the SOC competition as part of the Splunk .conf conference. And I was also invited into Offensive Security’s Proving Grounds beta and poked around, which gave me a chance to shake some rust off my penetration testing of boxes.

Overall, that was mostly my year. It didn’t feel as productive as other years, but I’ll give it a pass considering 2020 was quite a shift and change for many reasons.

they need configuration management…

There’s a lot of noise on Twitter. But sometimes, there are threads that harken back to the days of quality hacker and infosec forums. Like this one from @InfoSecMBz:

I love these sorts of thought exercises.

Going from next to nothing to real configuration management for 5,000 servers is going to be a multi-year process, one that probably encompasses a full lifecycle for those particular machines. It just is, unless someone has the go-ahead to scorched-earth burn it down and rebuild, or to slam in a standard and deal with the broken assets and resources for a few years of pain (and burnout of admins).

And let’s just be real here if we’re talking to this client. There are mature shops who do lots of things correctly, but still have poor configuration management. On a made-up scale of 1-10 on the road to mature IT and security practices, configuration management is probably around 4-5 to start, and 7-8 to really own.

Below are some of my bullet items. And yes, I know there’s a whole thread to cheat from, but in true thought exercise spirit, I’ve tried to minimize spoiling myself on all the other answers right away.

0. Discuss the scope, deliverables, definitions. Wanting to do “configuration management” can mean different things, and for a project that could take years, the specifics really need to be discussed.

  • For instance, is the desire to have a hardened configuration baseline?
  • Or just a checklist on how every server is built at a basic level?
  • Is it necessary to know and profile all software installed?
  • Does this include configuration management for all features, roles, software? E.g. IIS/Tomcat/Apache, etc.
  • What is the expectation to build on-going processes, audits, checks to ensure compliance? Is this even about compliance?
  • What is the driver for the customer asking for this? Is it to adhere to a specific requirement, to eliminate an identified risk to operations and technical debt, or because someone read an article or talked to a peer?
  • What is the vision of the future? Someone at some point needs a 1-year, 3-year, 5-year vision of how the environment is managed. “In the future, I want all servers to have a documented build procedure and security configuration automatically enforced continuously and all changes known and tracked.” Vision statements help contain scope, determine deliverables, and help define success.

I would start by breaking out some of the layers of “configuration management.” My assumption here is this post will cover the first two items, and leave the others for future maturity.

  • There is OS level configuration management, including patching.
  • Then there is management of software.
  • Then there is configuration management of things that live within the OS (software, features, services, server components..).
  • And then there is configuration management of custom applications, code, web apps.
  • Lastly, I also consider networking devices to be a separate discussion.

If a customer truly does not know what they want, I would say what they want is threefold:

  • They want to know their inventory/assets.
  • They want to patch and know their patch coverage metrics.
  • They want to know how to build/rebuild their servers to minimize ops risk/cost.

00. Plan the project. At this point, there should be effort made to plan out the project. The items listed below are not meant to be done one by one, only moving to the next after finishing the first; there’s no way that project would complete successfully. Instead, many of these items can run in parallel for a long period of time. There should also be milestones and maturity levels that are achieved as the project progresses. And there are questions of how to move forward. Should we tackle the whole environment at once, or should we tackle small swaths first? If we do a small group first, we can more quickly produce proofs of concept, and possibly pull in other lines of servers later on. Or maybe we just stand up a good environment, and as server lifecycles kick in and servers fall off, their services could be brought back up in the “good” environment. All of the above are ways to go, and an idea should be formulated at this point on options to move forward and track progress.

1. Inventory. This needs to start with some level of asset inventory to capture what is present in the environment. What OS and version, where is it located on the network, what general role does it play (database server, web server, file server, VM host…), physical or virtual, and a stab at who the owner of the system is. This should be a blend of technical and non-technical effort and is meant to be broad strokes rather than fine-grained, painstakingly detailed. On the tech side: scanning the known networks*, looking at firewall logs, looking at load balancer configurations, looking at routing tables and ARP tables, dumping lists of VMs from VM hosts. On the non-technical side: interviews with staff who own the servers and interviews with staff who use resources that need to be known. All of this information will fuel further steps. And I want to stress that very few of the subsequent steps will see true success without this step being taken seriously.

(* This may be a good time to also have the customer introduce a baseline vulnerability scanning function. There is a lot of overlap here with a vulnerability scanner that scans the network for assets, tries to log in and do various checks, and enumerate patch levels and software installed. Or it might be time to implement a real asset CMDB or other system. Keep in mind each OS family will need some master “source of truth” for asset inventory.)
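To make the scanning side concrete, here’s a minimal sketch of the kind of discovery sweep that could seed this inventory, assuming nmap is available. The 10.10.0.0/16 range is a placeholder for the customer’s known networks:

# Ping sweep a suspected server range and keep just the live hosts.
nmap -sn 10.10.0.0/16 -oG - | awk '/Up$/{print $2}' > live-hosts.txt
# Follow up with OS/service fingerprinting on the live hosts to rough-cut
# OS family and role (web, database, file...) for the inventory.
nmap -O -sV --top-ports 100 -iL live-hosts.txt -oA inventory-scan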

From here, we can start splintering off some additional tasks to run in parallel or otherwise. For the sake of simplicity, I’ll just put these in a rough order, but some things will start together, and end at different times.

2. Determine external accessibility. The point here is to quickly know the most at-risk systems to prioritize their uptime, but also prioritize getting them in line and known. Most likely these are the systems most needed to be up, recoverable, and secure. This will require interviews, perimeter firewall reviews, load balancer device reviews, and even router device reviews to map out all interfaces sitting on the public Internet, and how those map back to actual assets on the internal networks.
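As a rough sketch of the outside-in check, once the public allocations are known (from firewall configs or RIR records), something like this can verify what actually answers out there. The address range is a documentation placeholder:

# -Pn skips ping, since perimeter gear often blocks it; the results feed
# the external exposure priority list.
nmap -Pn --top-ports 1000 203.0.113.0/24 -oA external-exposure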

3. Start making patching plans. In the scenario above, they don’t know a thing about security. This tells me they likely don’t have good patching. And this is going to have to be dealt with before tackling real configuration management. Based on the OS versions in play in the environment, plans of attack need to be made to automatically patch those servers. If this is a Windows environment, for instance, WSUS or SCCM or some other tool needs to manage the patching. This step is really about planning, but eventually it will get into deploying in phases as well. Don’t overlook that major service packs and version upgrades technically count as patches.

4. Find existing documentation and configuration practices. Someone’s been building these servers, and something has been maintaining them to some degree. Existing practices and tools should be discovered. Staff who build servers from a checklist should expose those checklists. If builds are done from memory, they need to be written down. If some servers are Windows and there is a skeleton of Group Policies, those need to be exposed. If they are Linux systems and some are governed by Puppet, the extent of that governance needs to be exposed. If possible, start collecting documentation into a central location that admins can reference and maintain, and where differences can be exposed.

4a. Training and evangelism. At this point, I would also start looking at training and evangelizing proper IT practices with the admins and managers I interview. From a security perspective, I find a good security-minded sysadmin to be worth 3-4 security folks. Sysadmins help keep things in check by design. They’re the ones who will adhere to and promote the controls. If the admin teams are not on board with these changes, all of the later processes will break down the moment security isn’t constantly watching.

5. Change management. Chances are this environment does not have good change management practices. At a minimum for our purposes at the start of this big project, we need to know when servers are stood up and when they are decommissioned. This way we have a chance to maintain our inventory efforts from earlier. If there is no process, start getting someone to implement one (anything from a manual announcement, to an automated step in deployments, to picking them up with the network scanning iterations). One side goal here is to use the earlier network scanning for inventory to compare against what is exposed through change management, as sketched below. If a server is built that is a surprise, it can be treated as a rogue system and removed until change management authorizes it. This process helps reduce shadow IT and technical debt due to unknown resources in play. It also helps drive the ability to know what percentage of the whole is covered by later controls and processes. If you don’t absolutely know the total # of servers, you can’t say what your patch % is, for example!
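A hedged sketch of that rogue-system check, assuming both sources are boiled down to one host per line (the file names are hypothetical):

# cmdb-hosts.txt comes from change management records; scan-hosts.txt from
# the latest discovery sweep. comm requires sorted input.
sort -u cmdb-hosts.txt > cmdb.sorted
sort -u scan-hosts.txt > scan.sorted
# Hosts seen on the wire but never announced through change management:
# candidates for rogue/shadow IT follow-up.
comm -13 cmdb.sorted scan.sorted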

6. Analyze inventory. At this point it should be appropriate to analyze the inventory and see what we’re dealing with. How many families of OS are present, what versions are they, what service pack levels, and what patch levels? Which systems are running unsupported operating systems? We should have some pretty charts showing the most common OSes in place. And these charts can help us direct where our efforts should focus. For instance, if 80% of our environment is Windows, we should probably focus our efforts there.

We should also start looking at the major types of servers, such as web, file, storage, and database, and the percentages of each.
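Before anyone builds pretty charts, quick command-line summaries get the same answers. A sketch, assuming the inventory has been consolidated into a hypothetical inventory.csv with hostname,os,version,role,owner columns:

# Rank OS families by count to see where configuration work pays off most.
awk -F, 'NR>1 {print $2}' inventory.csv | sort | uniq -c | sort -rn
# Same idea for server roles (web, database, file...).
awk -F, 'NR>1 {print $4}' inventory.csv | sort | uniq -c | sort -rn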

7. Baseline configuration scan for each OS family/version. This might take some effort, but this is about seeing the damage we’re looking at. This does not have to be a scan that gets every server, but from the inventory analysis above, we should be able to pick out enough representative servers to scan with a tool we introduce and get an idea of what our current configuration landscape looks like.

Bonus points on this item if a standard has been identified and used as the comparison to see drift, but I wouldn’t consider that necessary quite yet. This is all about getting a baseline scan that we can look at a few years from now and see just how much improvement we’ve made.
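For flavor, a toy version of such a baseline probe against a few representative Linux servers might look like this; a real effort would lean on purpose-built tooling (OpenSCAP, CIS-CAT, a vulnerability scanner’s policy checks), and the file names here are made up:

# Grab a couple of comparable settings from each representative host
# (assumes SSH key access; -n keeps ssh from eating the host list) and
# keep a dated snapshot to measure against later.
while read -r host; do
  echo "== $host =="
  ssh -n "$host" 'grep -i "^PermitRootLogin" /etc/ssh/sshd_config; stat -c %a /etc/shadow'
done < representative-hosts.txt | tee "baseline-$(date +%F).txt"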

8. Interview owners and expand inventory data. Start chasing orphaned servers (shadow or dead IT?) and get them assigned an owner. This also helps determine who is really accountable for various servers. This usually isn’t the admins, but the managers of those admins, who will be the ones who end up needing to know about and authorize changes such as patches and configuration changes. Try to figure out if certain owners will be easy to work with and others difficult, to help prioritize how to tackle getting servers in line.

Just to note, at this point, we’ve still not really made any changes or done anything that should have impacted any services or servers. That will start to change now.

9. Patch. Expand change management scope to include patching approval and cadence. Synthesize asset information on system owners and patching capabilities. Determine a technology and process to handle OS patches, and start getting them deployed. This may take several iterations and tests before it starts moving smoothly, which may be half a year for so many servers. Try to make sure progress can be tracked as % of servers patched and up to date compared against your full expected impacted inventory.
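Since the percentage only means something against that full inventory denominator, the tracking math itself is trivial. A sketch with hypothetical input files:

# inventory.txt: every server we believe exists (from the earlier steps).
# patched.txt: servers reporting a current patch level from WSUS/SCCM/etc.
total=$(sort -u inventory.txt | wc -l)
patched=$(sort -u patched.txt | wc -l)
echo "scale=1; 100 * $patched / $total" | bc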

10. OS Upgrades. At some point in this large environment, systems will need to be replaced or upgraded, or there will no longer be patches available for them (unsupported). Start to plan and document the process to do server upgrades or replacements. This can be wrapped into lifecycle management practices. The changes from this tie into the change management process. And if you have really good build documentation for base servers, but also for the roles of the servers you’re upgrading, you can morph server “upgrades” into server “replacement with newer fresh versions.” This helps combat the technical debt that comes from servers upgraded over many years where no one knows how to actually build that server if it got dumped.

11. Compare baseline against configuration knowledge. Think about comparing against a known secure configuration standard to find the delta. CIS benchmarks are great for this, and this step is only about comparing against something, not yet making a process to get closer. For the most part, this is about comparing your baseline against the configurations you think your servers should be meeting based on interviews and how staff has built servers in the past. Where it makes sense, start leveraging change management and configuration management tools to bring non-compliant servers in line. A major deliverable from this step should be your first versions of configuration standards.

Only now do we get to actual “configuration management.”

12. Implement configuration management capabilities. For the largest and easiest swaths of OS family/versions, start implementing configuration management tooling. For Windows, make sure they are joined to the domain, analyze and standardize Group Policy. Create a starting point and get missing systems in order. For Linux versions, get them into and managed by Puppet or some other solution.

13. Enforce standard server builds. The situation here is that servers are now patched properly, configuration enforcement tools are in place, and build processes are exposed. This means teams and admins should only be building servers based on known configurations and standards. This is a people process and making sure all admins are doing things the same way.

14. Implement process to improve the configurations. There are many ways to do this, but it all comes down to having a process to choose a configuration setting to change, test it, track it, announce it, and implement widely. This can be based on an end goal configuration, or just picking various settings and making them better; making systems more hardened.

Keep in mind this does not mean having perfectly secure configurations. You can try for that, that’s fine, but it’s about having a process to continuously move in that direction.

Further steps will tackle the other scopes, such as installed software, roles/features, settings within installed software, etc.

Lastly, this project should only be walked away from with the customer aware that most of the above steps introduce new functional processes that have to remain in place continuously in order to succeed. Once these processes are neglected, that function will break down, configuration drift will occur, and progress will reset.

passed aws security specialty exam

Last week, I took and passed the AWS Security Specialty certification exam. This is an advanced “specialty” certification offered by AWS centered around, surprisingly, implementing and managing security within the AWS cloud platform. This certification until recently required passing one of the Associate level exams, but today you can skip right to it if you think you can pass it. To renew, you only really have the option to take the exam again, at a reduced price.

Background and cloud path

I started my AWS cloud path about this time last year for two reasons. First, I wanted to stay current with my own skills, and AWS wasn’t something I had the pleasure of supporting or playing in yet. Second, my company is in the process of moving workloads to AWS, and I wanted to keep up. In August of 2019, I passed the Cloud Practitioner. In September I passed the CCSK. And in December 2019 I passed the AWS Solutions Architect Associate. I started from Practitioner because I honestly was pretty fresh into AWS technologies and services, though not foreign to the concepts of the cloud after 18 years of IT experience. I could have tried to skip to the Security exam, but as someone painfully fresh to working within AWS, I chose to do the Solutions Architect first, as many of its topics are foundational to the Security exam.

My goal has been not just to become aware of and conversant with AWS technologies, but to pave the way for hands-on pursuits, both personal and on the job. It’s been a bonus that the stars have aligned enough to allow me to learn on the job and build out a greenfield environment in AWS as we migrate into it full sail. I want to be able to understand how things are built in AWS, maintained in AWS, and secured in AWS so that I can begin to break it from the offensive side and then respond to such activity efficiently. In the end, I want to be able to advise others on proper AWS security topics, both at a high level and also in the trenches.

And it’s a little self-serving; it’s a nice career safety net as well, much like my sysadmin skills and experience are to my infosec career.

Study plan

For the Security Specialty, my study plan started by learning the concepts for the prior exams. I kept a sick-looking Gantt chart of my study plan efforts, but the Security Specialty was definitely pretty streamlined.

I also must mention that my original goal to pass this exam was Q1 2020. Unfortunately, COVID-19 concerns shut down testing opportunities and really stole some of my study time away, which elongated my efforts a bit. Thankfully, I don’t have other major plans for formal studying or certifications this year, so I had some personal wriggle room to push this into Q2.

As with other efforts, I started with A Cloud Guru’s course on the Security Specialty. This course is a little short on covering all the things you need, and it blasts through the material quickly. And I have the same issues as I did with the Solutions Architect course: it’s just not polished, there are plenty of mistakes left in the material, and it’s not nearly complete enough to rely upon to pass the exam. Still, it makes a relatively OK intro to the material. I’ll probably revisit this as review if I should renew the certificate in 3 years, since that requires taking the exam again.

My main effort centered around the related Linux Academy course by Adrian Cantrill. I really liked this course, and felt pretty darn prepared for the exam after it and after taking the Practice Exam (I scored 88% on my third and final take of it). This course is well over 40 hours, and does a good job being broad and deep enough to properly prepare for the exam.

Lastly, I spent time reading whitepapers, documentation, and FAQs off the AWS site on all of the security-related and core services I could. More importantly, I strongly suggest browsing the AWS security blog posts from 2017 through the end of 2019 to see scenarios and tutorials on how to do things like properly secure your root user, or incident response steps for a compromised key, or how to troubleshoot CloudWatch connectivity issues and various other common or weird scenarios. These scenarios and understanding how these work is extremely useful for the exam. As a bonus, truly read through these and follow along on the actual steps, or even recreate them in your own words.

I didn’t get a chance to use it, but if I had failed my exam attempt or started later than I did with my studying, I would have probably purchased the Practice Exam from Tutorials Dojo. My experience with Jon Bonso’s content was positive enough for the Solutions Architect that I would blindly pitch my money into this one had it come out earlier.

The at-home experience

I took my exam through Pearson Vue using the at-home option. I thought the exam itself was stressful and difficult, but I have to say the at-home experience was even more stressful to me. See, I live in an apartment, and some of the rules of the at-home component dictate a strict no-sound posture. In fact, you’re not really supposed to even look away from the screen or make noise on your own! Thankfully, I have no idea what the real limits are, as I had absolutely no interruptions, noises, or contact with the proctor during my 1.5-hour exam. But, throughout my exam I dreaded noise in the hallway or a door shutting that would cause a disqualification!

Being in an apartment, I did quite a lot to prepare. I covered bookshelves, moved everything away from my dining table, and made an effort to minimize anything that needed scrutinizing. I had a USB web cam next to my laptop, but I was asked to move it to the top of the laptop anyway; I probably could have just used the built-in cam. I never did hear anything from the proctor, only chat, and I have no idea if the proctor could hear me, as I typed my responses in and echoed them verbally as I did. The biggest instruction I had was to make sure a wall-mounted television in front of me was unplugged, so I had to quickly uncover the outlet and power cable to show it was indeed unplugged.

And while I apparently had no issues, it was definitely not relaxing taking the exam. Even halfway through, my head, neck, and shoulders were hurting from being all tensed up, and my eyes really yearned to just look up and afar for a bit while in thought on many of the questions. I’m pretty sure 30% of my mind was on my actions/behavior and not on the exam.

I imagine this is far, far better in a home where you can maybe go to a relatively empty basement room and keep pets/mates from making noise elsewhere, and not have to worry about much.

The exam

Basically, this exam sucks. I mean, it’s a good exam and really tests your knowledge of not just knowing how things work, but really digging deep to make sure you know or have done the steps in the many scenarios presented. I would estimate that maybe 40-50% of the questions were choose 2 or choose 3 answers. I’d guess about 70-80% were scenarios, and almost nothing was straight definitional.

Most questions were also pretty long to read and digest, and I found myself re-reading whole sections before even getting to the answers. And often the answers were lengthy as well. It was a splash of cool water when I hit short questions with short answers!

I also usually get done with an exam and I retain a litany of questions or items that tripped me up or I was happy to see, but in this case, I walked away at the end and had but one item to look up, and even then I couldn’t remember the context of the question!

Overall, I really liked the exam for what it tested against. This isn’t a light exam or something you can swing into with painfully little experience. You really have to understand how to do these things to get through it. I scored a 940 on it, and I’m extremely surprised and satisfied with that score.

That said, it should be kept in mind that this is still a multiple-choice exam. Even if a question is a big fat question mark, often one or two options will bubble up to the top and help formulate a decent guess.

What’s next?

Honestly, for AWS stuff it’s really about practical experience at work and on my own that is next. I will probably check out the Developer or SysOps Associate certs in time, when I can apply those to renew the others. But otherwise, I have what I came for on the formal learning side of AWS, and my immediate path to the dark side is now complete.

For cloud stuff, I’ll probably look at learning more about Azure through Linux Academy on my own time. And I’ll start focusing on topics that pertain to security and even penetration testing cloud deployments.

For security stuff, most of the rest of my 2020 was planned to be pretty informal, which works out well considering COVID-19 has changed things and put other things on hold so dramatically. I have a backlog of courses, tutorials, and other learning activities to do that would eat up years, so I want to chunk away at the juicier parts of that.

lab upgraded to esxi 6.5

I keep a lab at home, but I honestly don’t upgrade the underlying guts of it very often. I really got sick of rebuilding things in my early years as an IT admin. I like when things work, and as long as they keep working and my threat profile remains the same, I tend to keep the underlying infrastructure pretty much untouched. I’d rather wrestle and play with the VMs that run on top of things, ya know?

Typically, my upgrades come about when I change hardware. Or when something doesn’t work. Tonight, I tried to install Kali 2019.4, but I had some hard, unknown stops that felt like VM host limitations. Rather than fight with it, I thought I’d upgrade my VMware server.

My main lab is an Intel NUC device running a VMware ESXi 6.0 bare metal install. I really dislike the web management interface in modern VMware, so I’ve clung to 6.0 for about as long as I’ve been able to. I also really like having the option of running consoles from the vSphere Client application.

Upgrading the ESXi installation is about as easy as it gets. I verified some instructions and then went to town.

# Open the host firewall so esxcli can reach VMware's online depot
esxcli network firewall ruleset set -e true -r httpClient
# List the 6.5 image profiles available in the depot
esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml | grep -i ESXi-6.5.0-2019
# I decided to choose ESXi-6.5.0-20191204001-standard and move on!
esxcli software profile update -p ESXi-6.5.0-20191204001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

[InstallationError]
[Errno 28] No space left on device
vibs = VMware_locker_tools-light_6.5.0-3.116.15256468
Please refer to the log file for more details.

Well, that sucks, but I definitely have room on my device. A quick search showed me that this is an update error I could fix by letting ESXi use disk space as swap when needed, which it apparently needed for the upgrade. A quick visit to Manage > Settings > System Swap got me squared away, and the above update command succeeded in surprisingly minimal time. Next, I rebooted the device. Then, I returned the local firewall option to false, logged into the management console, and confirmed my version was 6.5.
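(That last bit is just the earlier ruleset command with the flag flipped:)

esxcli network firewall ruleset set -e false -r httpClient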

I then installed the VMware Remote Console application in order to use a standalone app instead of browser windows for console access. Either way, I dislike them, but the standalone app is the lesser evil. I downloaded version 11.0 from VMware directly, but it can also be grabbed when first trying to open a remote console off a VM.

My core VMs fired up just fine (pfSense and a jump box), and I was then able to install Kali 2019.4 without issues at all. I have no idea what the real fix was, but I’m glad that a mere ~30 minutes later I’m past the issue.


how I track semi-formal study plans

Usually when I study for a certification or course, or even for comprehension on a topic, I have steps written down to check off on my journey to that goal. I’ve probably always worked off checklists, but it feels like I rely on them more as I get older, as there’s really no excuse to not use them and forget things or lose ideas to the ether. My time really does have a personal value, and I’d like to make sure I spend it well and efficiently.

When I decided to tackle learning more about cloud security, I knew the topic involved reading and listening to materials on a topic that I’ve not been highly exposed to. And I wanted to make sure I planned out how to spend my time so that I could plan the rest of my year and have an idea when to schedule exams.

The above screenshot is a sheet I maintain on Google Sheets. In it, I basically use a Gantt chart style format to track my progression of tasks and how long they will take. I estimate the hours involved, record the actual hours I spent, and then the remaining hours and % hours used adjust automatically. The % Complete column I update manually. For instance, I may estimate 10 hours for a task, but find it only takes me 4 hours. I can then record 4 hours and still set it to 100% complete.

Do I really care how many hours are left? Not really, but it’s a way for me to practice some Project-Manager-lite skills and be familiar with a sheet like this. Lots of things that PMs do to perform and track projects are intuitive, pragmatic things that I can use for other purposes, even if I don’t know all of the specific terms in some Body of Knowledge.

And since this is just for my own personal tracking, there’s really no grading or performance evaluation based on how accurate or well I track this; it’s really best effort and its accuracy isn’t crazy strict. It’s truly just about keeping myself on target. (And, I suppose, it reminds me what I did on my route to a certification so that I can post about it later without much recollection effort!)

I will also add that one of the more important steps in pretty much every major learning effort I tackle is researching what others have done before me. This was a huge effort in something like the OSCP where I would read reviews and thoughts and threads from others who had experienced and passed the course/exam and what they recommended for prerequisite knowledge and resources to understand prior and during the learning phases. I still do this as much as possible, and it leverages strengths I have in effectively and efficiently Googling and sifting through information and then organizing and prioritizing what I really need to do.

learning and training goals for 2020

Every year I try to make some achievable goals for myself for learning, practicing, and getting certified in various topics related to my career in IT and infosec. I’ve been in the industry for over 18 years, and this is the fourth year I’ll have made and pursued concrete goals. In my early years, I learned a ton through informal self-education, and later on pursued a trickle of formal certifications. Then I coasted a bit, and have since made specific effort to formulate goals and plans to achieve them. More often than not, the number of things I want to learn and do far exceeds my capacity to pursue them in a given year, but I do try to make concerted effort to make progress forward through the backlog and keep my activities focused on some goals.

I have a bunch of options for this year, and with the way the year is starting out, I may have some fluid choices to make as the year progresses. For the first quarter at least, I have a solid priority that won’t change. From there on, I’m just giving myself some options while planning on doing some maintenance of skills and make use of the wide range of online labs and platforms available these days.

Honestly, I have quite the backlog of one-off courses, lab environments, challenges, presentations, and other things to do and consume that I don’t want to spend most of my free on-keyboard time in 2020 doing formalized training towards a certification. I want to keep free time and energy set aside to do these sorts of filler tasks, bits of learning, trying new tools, and chopping away at the large list of things I want to do, complete, or learn. Keeping the time free also lets me do things like sign up for a month-long lab (paid) if I so desire, without wondering if I’ll actually get to it in time.


Formal Training/Certifications

AWS Security Specialty (Q1) – The next step on my cloud journey, and really the goal of this journey: studying to understand this topic and passing the cert exam. I consider this one to be somewhat technical and a little hands-on, since I plan to work within AWS a bit more while studying for it. I expect this to take about one quarter.

Either CCSP, CISSP-ISSAP, or CSSLP – I’m skeptical how useful the CCSP may be, and I’m not sure I’d make great use of the CSSLP. The CISSP-ISSAP domains also look pretty familiar and known to me, but it would be a nice progression to consider. Overall, I don’t need to commit to more than 1 of these this year. And no matter the choice, these are book-study activities where I may learn some additional tidbits.

Either AWAE (OSWE) or SLAE (towards CTP/OSCE) – I do like to mix in hands-on-keyboard activities along with book-study plans, and these would be very much hands-on events. I’ve long wanted to do the OSCE, and I’ve long slated SLAE as a precursor towards it. AWAE is new and I may get a little bit more worth out of it. Either way, I probably can’t do both of these in one year, and I really should get one at least started in 2020.

SANS course/cert – This item goes away if my work budget doesn’t allow for a choice of SANS course. If my choice of course/cert does get approved, I actually wouldn’t anticipate the preparation for the exam would take a full quarter; I’m initially planning just a month.


Informal Training

Pentester Academy – I have this subscription and I should make a concerted effort to fill some gaps in the above studies with some time going through these courses for understanding. With no exam or post-completion activities, these can be the sort of thing I sit down for a week or two and binge through.

Various specific courses signed up for – I have several free-tier courses I’ve signed up for in the past 6 months that I’d like to pursue. They’re nothing crazy intensive, but not something I can bang out in a weekend or even 1 full week. Hence, they get placed here. Doing some of these one-offs may be “important” enough to me to include in my goals in a more ad hoc fashion as the year progresses.


Maintaining or improving existing

Maintaining existing knowledge or skills is often a lot easier than learning something brand new, so I try to make use of this section before the list of new things I’d like to get to. The things I want to maintain or improve specifically: web app testing, Linux, pentesting, forensics, PowerShell, Burp Suite.

HackTheBox and web app testing platforms and labs. Honestly, I can get plenty of practice by continuing to semi-regularly dive into HTB and dissect various web app testing platforms and labs. The platform of choice is usually Kali and Burp, and HTB challenges often can introduce chances to practice some scripting and forensics.


Informal new skills

Reversing – I have a couple books, free courses/tutorials, and other resources to use here.

Binary exploitation – SLAE and some HTB pursuits may start to give me confidence in this topic.

AWS – I plan to get more AWS experience not only with earning the next cert in that part, but also doing an AWS project to stand up a public wiki again. I had one years ago that I hosted, but when I moved to my current cloud provider, I just left the wiki behind. I kinda miss it.

Python – I have a bunch of small tasks and topics I can use as fillers and as excuses to do some more Python scripting.


Other

AWS Wiki project – A project just to stand up and utilize and maintain a wiki platform again, this time hosted in AWS.

Defcon – I’d like to attend Defcon this year, and if so, I need to plan this sooner than later!

Blog – I just want a reminder to keep blogging.

Pocket – I have lots of things sitting in Pocket that I should start consuming.

Career Goals – I should make a concerted effort to decide where my career should go and specifically what I want to do. This is basically a 5-year plan. This has always been hard for me, since I like doing almost anything in security, as long as I have support to do it.


Certs to renew

CISSP – Just my yearly reminder to declare a few CPEs just to keep up.

CCNA Cyber Ops – This actually expires 3/2021, so if I wait until 2021 to look into it, that’ll be really late! Renewing probably means taking the exam again (not worth it), taking the Cisco CCNA R&S (marginal value to me), or taking a CCNP-level exam. I’m inclined right now to say I will let it lapse, but I want to make a specific effort to research this.

passed aws solutions architect associate

As a last act of 2019, I took and passed the AWS Solutions Architect Associate certification exam. The AWS SAA is the typical starting point for sysadmins and engineers looking to design, plan, and manage an organization’s presence in the AWS cloud environments. Other exams at this level are the Developer Associate for developers and the SysOps Administrator Associate for a more focused dive into managing systems in AWS. (SysOp is such a great, cool term. I hope it makes a comeback over SysAdmin…) Each of those three feeds into the more advanced Professional designations and also some advanced Specialty designations like Security, Big Data, and Networking. None of these need to be taken in order, but they do build upon each other, so it makes sense for students to progress up the chain.

In mid-2019 I decided to shore up a gap in my technology knowledge by diving harder into cloud concepts and security topics. I’ve spent about 17 years doing admin and security work, but I hadn’t had much chance to dabble in AWS until my current position. So, I decided to upgrade myself a little bit in this regard. Since then, I’ve earned my AWS CCP and CCSK designations. I decided to remain aggressive and hoped to get the AWS SAA before 2019 ended.

My goal with this track of study is really to study for and take the AWS Security Specialty certification exam, since…well…I’m a security geek! The CCSP is also on the roadmap, but mostly for its recognition and the fact it won’t really cost me anything additional to keep renewed along with my CISSP.

For study, I really kept to the same blueprint I use for most certifications. I start out by researching the exam, the exam topics, and what other successful students have reported and reviewed over the most recent years. Often, I do this by searching the TechExams forums, Reddit, and then also Google. I write down the ideas and resources those students used, research those sources, and start to formulate a plan of attack. Sometimes I’ll solicit advice from peers on Twitter, Reddit, or other media, but usually I self-research.

I opened up with a 7-day free trial to A Cloud Guru (ACG) and blitzed through their AWS SAA 2019 offering as quickly as possible. At 12 hours, this wasn’t too bad. But at “only” 12 hours, compared to the Linux Academy course at 54 hours(!), I assumed ACG’s offering wasn’t really going to get detailed enough to rely upon solely. Overall, this ACG course makes a good intro, but the presentation quality and style definitely go up and down. Some sections of the course are recorded with lower-quality equipment, which means you can experience very different sound levels from section to section. This becomes pretty distracting, even annoying. Likewise, an editor must not have been hired, as there are pauses and even retakes still present in the audio. Overall, I felt I could trust the author, but I also somewhat felt like the author rushed to get this out and it’s just not that polished. The material, however, was solid. I did not do any labs on ACG. I did like the meaty quizzes at the end of each section, though their grammar is spotty and the explanations for the answers are at times woefully brief, sometimes just repeating the actual answer rather than giving a reason for it.

Later, about a week before my exam, I opened another 7-day free trial at ACG just to consume their Exam Simulator, which is just a practice exam whose questions are pulled from a larger pool. I ran through it twice and only had maybe 1-3 repeat questions out of the 65 given. That said, the grammar on these questions was outright terrible, and I honestly felt dirty going through the experience. Still, plenty of questions reflected the sorts of topics and questions I saw on the exam.

I then spent the bulk of my study time on the 54-hour beast of a course, the AWS Certified Solutions Architect – Associate Level (id=341) by Adrian Cantrill, hosted on Linux Academy. This course includes LA-hosted labs, which performed very well for me, and a supplement, The Orion Papers, hosted on LucidChart. I was initially very lukewarm on the LucidChart materials, but by the end of my study I was actually referencing them regularly to refresh and review various topics. The course itself is excellent, with a high quality of delivery throughout. I did not like the quizzes nearly as much, but they do reflect the material presented.

I took the practice exam at the end of this course, and also an older LA practice exam from the 2018 course. I didn’t like either of them, as they seemed overly specific on bits of knowledge that go beyond what you are expected to know at the Associate level, like calculating RCUs/WCUs. Both quizzes strangely seemed to pull from the same pool of questions (or at least were written by the same people and/or borrowed from each other), and overall I found them frustrating.

About halfway through the Cantrill course, I signed up for a package of 6 practice exams hosted on Udemy by Jon Bonso (TutorialsDojo). I really liked this set of exams and found them to be challenging at just the right level, both while I was still completing my studies and in retrospect, after passing my exam and thinking back to where the overlaps between the exam and these practice materials occurred. I initially scored below 70%, but as I finished up the core of my studying, I was pretty consistently getting 75% on first attempts and 85% on subsequent tries. I reviewed all questions after each attempt, making mental notes of the reasons for questions I got right and physically writing down notes on questions I got wrong (or just guessed on). I would then re-attempt one of the practice exams after a week or more. Even if you pay full price (which I think is $40), this set of practice exams is definitely worth it.

Despite plans to do so, I never really consulted the official AWS whitepapers, FAQs, or Best Practices for the various services. I would sometimes get into them very briefly when Googling answers/reasons for practice exam questions, but I never sat down to go over them comprehensively. I also briefly looked at the TutorialsDojo cheatsheets; I had expected really quick charts and diagrams, but they turned out to be pretty lengthy, so I didn’t really consume them.

I also never really went into depth on my own AWS account or fired up any projects of any merit. I would still say 80% of my AWS hands-on experience before my exam was fueled by the Linux Academy labs. That said, my extensive general IT experience hosting critical web sites helped me with many troubleshooting questions and understanding some difficult concepts like using load balancers, traffic encryption, and network layouts. Someone with less IT experience should probably expect to do a little more hands-on work in AWS to prepare for the SAA exam.

Overall, I somewhat casually studied from mid-September until the end of December en route to my exam date. For other students, I’d highly recommend going through the route I did: ACG course, LA course (Cantrill), and then Udemy practice exams. I’d then suggest looking at the AWS Whitepapers, FAQs, and Best Practices to finish up. If you already know about how AWS works, concepts on why cloud makes sense, how AWS bills you, how AWS support plans are structured, and the general one-line definition of the most common AWS services, I think AWS SAA is the place to start. Lacking that knowledge, first taking the AWS Certified Cloud Practitioner is a great stepping stone into AWS knowledge.

The exam experience wasn’t really out of the ordinary. I scheduled my exam during winter break at the college where I usually take exams, so the whole atmosphere was casual, chill, and pretty dead overall. I spent a full hour on the exam, and that even includes flagging questions and reviewing the first 20 questions over again. I did not feel entirely confident in my attempt after the first 12 questions, but they seemed to ease up in the latter portions. I normally do not review or go back to previous questions in exams, but I did so quite a bit in this one. Still, I don’t think I changed many answers at all. It is possible to go back and review every question whether you flagged it or not, which is nice. Passing means achieving a score of at least 720 (on a scale of 100-1000), and I scored 836 for a comfortable pass.

Overall, I think the AWS SAA does a good job of ensuring that someone who already designs solutions and troubleshoots issues within AWS, or wants to start doing so, is prepared for that task. That said, I have next to no practical experience in AWS (that’ll change!) and was able to pass, so I would say this exam is appropriate for people with 0-2 years of experience with AWS services. That also means possession of this cert may not attest to someone’s actual expertise in AWS, but it definitely attests to a grasp of the fundamentals strong enough to not be a clueless disaster. (And honestly, that can be said about the CCNA or any other technical cert.) Despite that, I actually feel far more conversant and novice-level competent in understanding and doing things in AWS, especially compared to my pre-study state. I’m hoping future projects will fill in further gaps.

As intimated earlier, the AWS SAA is a stepping stone towards my real goal of achieving the AWS Security Specialty certification, so that will be my next step on this journey. I also have the ISC2 CCSP on my radar, but I think I’ll keep with the AWS focus for now, and plug in the CCSP later. Since the CCSP is more theoretical than hands-on technical, I am skeptical what I’ll actually learn from the CCSP, but I may end up surprised!

reviewing my 2019 learning and career goals

I really thought about not comparing what I did in 2019 against my planned goals, but then I realized that skipping the comparison isn’t useful to me at all, and there’s no real value in only restating what I did this year. I see I predicted that I’d be far too aggressive with my planned activities, and I was right! Still, I think it’s normal for me to over-commit to things and then accomplish what I can, rather than plan to underachieve and coast through another year. I used to do that, and I don’t really want to at this point in my career.

Rather than go through the full list, I figured I’d just pluck out the things I planned to formally pursue.

SANS SEC542 (GWAPT) at SANS East – Success! I ended up going to SANS East, earned a SEC542 coin, got first in NetWars, and later earned my GWAPT.

TBD Second major training: Black Hat USA Trainings or SANS SEC573 (GPYC) Python or SANS SEC545 Cloud – Failed! This one wasn’t really my fault. I aggressively (so to speak) requested budget for this at work, but that never came to fruition.

Linux+ – Success! I took and passed both exams before CompTIA refreshed the cert and broke from LPIC-1, meaning I got the lifetime version from CompTIA and also the more limited one from LPI. Not only was this a goal for this year, but it’s probably the last “certificate bucket list” item remaining from back when I didn’t even do this learning stuff regularly (thanks to a company and manager who didn’t value personal development).

SLAE (+ OSCE prep) – Pushed back! I don’t consider this too bad of a fail. I still want to start this track through to OSCE, but I also understand this is a labor of love more than it will benefit my career/work at this moment. It, again, will get on my list for 2020.

CCSP (Cloud) – Sorta Success! Honestly, this one morphed into something bigger and more formal than just pushing for CCSP. I’ve decided to make a concerted and bigger dive into the cloud security world. I pushed CCSP out to 2020 and instead earned my AWS Cloud Practitioner Certification and the Cloud Security Alliance CCSK. And since then, I have been hitting coursework and labs to attempt the AWS Solutions Architect Associate exam very soon. After that, my plan is to earn the CCSP and then the AWS Security Specialty.

Pentester Academy tracks (+Red Team Lab?) – Low usage! I haven’t given this enough love, just like I haven’t gotten back into HTB or other labs like I want to. I’m considering this a fail, and will be re-prioritizing for next year.

Linux Academy – Success! Hey, I’ve been making heavy use of this this year! I also dropped PluralSight as I wasn’t making heavy use of it.

Splunk Fundamentals & Power User – Dropped! I had wanted to pursue this, but this definitely was chopped off early. This is more of a work item, and my role hasn’t really allowed me to be in Splunk as much as others on the team have been. And that’s OK. I let this one slide to make more room for the cloud focus.

As far as my informal topics go, most of them just didn’t get as much love as I’d like to have given them. I’ve stuck to a few books that weren’t intensive time-sucks like The Phoenix Project, Tribe of Hackers, Tribe of Hackers Red Team, Red Team: How to Succeed By Thinking Like the Enemy, and Infosec Rockstar. I think I may repurpose “informal learning” into two paths: informal topics and maintenance/improvement paths.

I still attended SecDSM and BSides Iowa as expected, but I didn’t hit any other cons this year. I really should try to get to Defcon next year in the new digs…

the phoenix project, a personal path

Years ago, I became aware of the book The Phoenix Project (Kim, Behr, Spafford) and added it to my wishlist, but never actually picked it up. I remedied that issue over the past couple weeks by picking it up on Kindle and going through it. Rather than post a reaction or my thoughts on the book (at least for now), I just wanted to tell a small personal story that this book made me think about again.

Back around 2007, I worked as a sysadmin, and one of my main duties was supporting the servers hosting the critical web sites our developers built. Thankfully we were already well into the virtualization takeover, but we were still using Microsoft’s Network Load Balancer tool to spread load across about 7 Windows Server 2003/IIS 6 web servers in one data center (the outfitted closet behind my desk). These sites ran .NET code using all sorts of virtual directories and COM objects tucked into corners of the servers. And other things which I’ve thankfully lost memory of!

We had dev, test, and production environments, if I recall correctly. Deployments to dev and test took place Tuesday and Thursday afternoons and would take several hours of manual work and testing, during which time the entire environment was inaccessible to anyone because of everything that needed to be done to install and configure IIS and COM. Part of the COM install was done by a homegrown, no-longer-supportable tool built by someone I never knew; the rest was manual labor. And if one team needed a deployment, every other team pretty much had to feel that outage, thanks to the shared resources.

When I took this over, I immediately started doing a few things that seemed natural to me. I first made a clear checklist to follow for each deployment (know your work!), thereby removing the need to remember each step. I then started automating the pieces I knew how to automate using batch scripts to move files around.

At this same time, my company was also performing the implementation stages of a company-wide DR/BCP project. We added a second data center, and my server farm was about to grow from 7 production web servers and about 4 dev and test servers to 50 and more. We were also plugging in dedicated hardware load balancers as a much-needed upgrade from NLB. And we then needed to solve the file replication challenges of supporting two data centers that needed to fail over to each other. Exciting times!

But this expansion meant I needed a new solution for deployments. Devops was still not really a thing. PowerShell had just recently come out, and I decided to try learning it in support of this coming build-out. I mean, no one wants to work for hours and hours just doing tasks that a monkey could do on servers.

So I created a PowerShell script that would perform these deployments automatically. A copy of the script ran perpetually on every production web server. The copies would all “check in” to a common configuration file and “elect” a master, which controlled a second, installation configuration file. When I needed something configured, my script would orchestrate the installation kick-offs with each of the other servers in a predefined sequence. When a server received a command to do an install, the script would delete everything in IIS, remove all the other pieces, and then build it all back, every time. I had around 100 sites on these servers, and it was pretty glorious to watch them all run through these installs for a few hours. I minimized downtime where possible (database changes, you know, sometimes made that impossible) by using the load balancer to know when a server shouldn’t have traffic and when it was good to have traffic again. This was all replicated to the separate (and expanded) dev and test environments as well as to the servers at the DR site. Flipping over to a DR test was really just a matter of changing DNS and waiting a bit while the database also failed over (these being pre-Availability Group days).
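For flavor, here’s a minimal sketch of that wipe-and-rebuild idea in modern terms. This is not the original script: the election and orchestration logic is omitted, the WebAdministration module didn’t exist back on IIS 6, and the CSV site definition file is invented for illustration:

    # Rebuild IIS from a known-good definition file (hypothetical CSV: Name,Pool,Host,Path)
    Import-Module WebAdministration

    $sites = Import-Csv '\\deploy\config\sites.csv'

    # Tear down every site and app pool so nothing undocumented survives a deployment
    Get-Website | Remove-Website
    Get-ChildItem IIS:\AppPools | ForEach-Object { Remove-WebAppPool -Name $_.Name }

    # Build everything back from the config, which doubles as documentation
    foreach ($site in $sites) {
        if (-not (Test-Path "IIS:\AppPools\$($site.Pool)")) {
            New-WebAppPool -Name $site.Pool | Out-Null
        }
        New-Website -Name $site.Name -HostHeader $site.Host -Port 80 `
                    -PhysicalPath $site.Path -ApplicationPool $site.Pool | Out-Null
    }

The point of the design was less the code and more the rule it enforced: if a site isn’t in the definition file, it doesn’t exist after the next deployment.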

I solved quite a few problems with this setup. I lowered the amount of time an admin needed to spend doing deployments, and I lowered the overall time deployments took. Deployments could be scheduled and run unattended at any time (weekends, nights). Outage windows were greatly reduced when they were even necessary. Most of the time, by orchestrating traffic direction via the load balancer, I could let devs do seamless deployments any time they wanted. I could scale this up (to an extent) to accommodate our expanded environments. I achieved server consistency not only by removing human hands from the deployments, but, because I rebuilt every IIS server, I also eliminated the inconsistencies admins introduce when they troubleshoot something, get interrupted, and never get back to setting things back to how they should be. With a few networking exceptions, my dev environment was also comparable to the production environment, so if a developer could get their code to run in dev early in the dev cycle, it would also run in prod (none of this “it works on my laptop!” crap). As a side benefit, no one could add something to a server that wasn’t part of the known build procedure, as the script would wipe it out or simply not know to include it. And the script and its configuration file were self-documenting for what was needed.

Things were good, but they got better as time went on. When we migrated to Windows Server 2008 and IIS 7, I completely rewrote the script. I removed the need to pass a “master” token around and decoupled the script from the servers. I ran it on a dedicated system and utilized remote sessions to make changes on the servers. I also decoupled the actual copying of web code from my scripts and better utilized DFSR. This allowed developers to make simpler file changes within seconds if they wanted to. This also pushed management of “dev first, then test, then prod” pipelines into development hands, taking me out of that decision structure. I also made sure my script could install pieces and parts of sites rather than the whole server, if desired (while still keeping the ability to do a full clean and install). When moving to Windows Server 2012 and IIS 8, I again made smaller changes to improve support.
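Conceptually, the later version looked something like this sketch: a central host draining each node, rebuilding it over a remote session, and putting it back. The load balancer helpers are pure placeholders here, since that logic depends entirely on the device’s API, and the file paths are made up:

    # Central orchestrator sketch; Rebuild-WebServer.ps1 would hold the wipe-and-rebuild logic
    # Placeholder LB hooks; the real logic depends on the load balancer's API
    function Disable-PoolMember([string]$Server) { Write-Host "draining $Server from the pool" }
    function Enable-PoolMember ([string]$Server) { Write-Host "returning $Server to the pool" }

    $servers = Get-Content '\\deploy\config\webservers.txt'   # hypothetical server list

    foreach ($server in $servers) {
        Disable-PoolMember -Server $server
        Invoke-Command -ComputerName $server -FilePath '\\deploy\scripts\Rebuild-WebServer.ps1'
        Enable-PoolMember  -Server $server
    }

Running the deployment logic over remote sessions from one box is what made the servers themselves disposable: nothing deployment-related lived on them anymore.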

By the time I was done with the last iteration of my scripts, it was about 2013 and we ran that infrastructure until I left in 2016. We didn’t really dive too hard into devops, since we didn’t really have to. I had somewhat naturally found those concepts by improving delivery, improving consistency, reducing risk, and reducing my pain felt during deployments and in support of mistakes. No one should like to be forced into constant heroic efforts to keep the lights on.

Many of those lessons are buried in The Phoenix Project, which is really the same story: an IT shop in a (rather busy) company discovering how devops improves IT operations. It doesn’t take an Erik oracle, threats of a business falling over, or fancy production-floor studies and terminology to figure out how to improve operations, ease your pain, and make things better. If you allow it, it will happen (to a degree) on its own as people manage their little fiefdoms more efficiently and reduce their own personal pain.

Had I remained with that company, I’m pretty sure I’d have next dumped my homegrown PowerShell scripts and done one of two things: either continued with my fiefdom and implemented more established devops tooling like Ansible to manage the environment, or married up to the developers and their chosen packaging and deployment pipeline (their issue being they couldn’t get every team to decide on just one).

The Phoenix Project has many more nuances; it’s like taking the IT issues of 50 companies over 5 years each and compacting them all down into one year at just one company. It’s a little silly, but it illustrates all the pain that eventually led many teams and engineers down the general path of devops, which is still really just about serving the whole point of IT: automation.

finding a quick and accurate state of security

Those who have done security consulting or auditing will probably answer this question far better and quicker than I. In fact, I bet there are checklists available that I could grab in minutes to answer this. Maybe I’ll check for some after posting…

Nonetheless, I decided to do a thought exercise with myself: What would you look at or do to discover the biggest information security issues in a corporate environment in a short amount of time? It’s one thing to be on a job for a year and ferret out all the dark secrets, snowflake servers, and weak adherence to policy. It’s another thing to have a single interview or day-long engagement with someone about security posture (and more than likely get told what sounds good and correct).

But what would one look for to get a quick, accurate, and fairly holistic look at the state of security, and thus formulate some findings and courses of action to tackle? And I’m not going to (necessarily) take the easy route and list off the CIS Top 20 Controls, even though they’re a good place to orient an evaluation of an environment. I also want to avoid questions that few people can answer easily or that are easy softballs, like knowing what data is on all mobile devices that might go missing, or whether encryption is employed on all mobile devices.

1. Interview the technical people in the trenches. Ask them what the biggest security problems are. Not all of them will care about security or have any thoughts beyond their own job, and some will not be very open in group settings or with a manager present, but I have long been of the opinion that people in the trenches have a finger closer to the pulse than most management will care to admit. Find the subset of IT geeks that have security opinions, invite them to dinner and some beers/wines, and ask the questions.

2. Internal authenticated vulnerability scan that covers at least 50% of the environment and at least a sampling of every major operating system (including workstations). The main goals here are seeing patch level and consistency, plus configuration consistency across the environment.

3. Scan and analyze the health of Active Directory. This includes not just looking at the objects, but also at permissions and attack paths, such as with a BloodHound scan of AD.

4. Inventory scan of local administrative access (or equivalent) on all operating systems (a rough sketch of one such sweep follows this list).

5. Percentage of confidence in these systems being accurate and complete: hardware inventory, software inventory, network and business systems diagrams.

6. The state of policies and supporting procedures documents relating to technical security controls. This is not talking about an Acceptable Use Policy for end-users or high level policy statements, but how detailed and easy these are to find and consume.

7. Describe the security awareness training offerings for internal employees.

8. Analyze network firewall policies/configurations. For this, I am looking at how organized the rules are, how tight they are, and how documented they are. What is the process to change them?

9. What are the next 5 projects related to security initiatives? If none, how many security employees are there? Basically, if someone doesn’t have security projects, perhaps they are in a mature mode with existing staff. If neither really exists beyond reaching for strange ideas that probably aren’t approved or backed by management, there probably is not much security emphasis, if any at all.
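To make item 4 concrete, here’s a rough sketch of the sort of sweep I have in mind for the Windows side. It assumes PowerShell remoting is enabled, Windows PowerShell 5.1+ on the targets (for Get-LocalGroupMember), and a made-up host list file:

    # Collect local Administrators membership across a list of Windows hosts
    $hosts = Get-Content '.\windows-hosts.txt'   # hypothetical inventory extract

    $report = foreach ($h in $hosts) {
        Invoke-Command -ComputerName $h -ScriptBlock {
            Get-LocalGroupMember -Group 'Administrators' |
                Select-Object @{n='Host';e={$env:COMPUTERNAME}}, Name, ObjectClass
        }
    }

    $report | Export-Csv '.\local-admins.csv' -NoTypeInformation

True that CSV up against the AD admin groups and the people who should actually have access, and surprises tend to fall right out.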

inventory is your bedrock to build everything else on top of

(This is an incomplete draft I’ve had for a while now. I don’t think I’ll ever complete it, but I didn’t want to lose it or keep it as a draft, so here it is.)

Daniel Miessler has a great article up: “If You’re Not Doing Continuous Asset Management You’re Not Doing Security.” You honestly cannot dislike that title, and the article itself is full of the points enlightened security folk should already have in their heads.

There’s a reason the top 2 controls in the CIS Top 20 Critical Security Controls are all about inventory. It drives every other thing you do in security, and without it, you’re managing by belief and never really sure if you’re being effective or not.

There are many different ways to tackle inventory, but here are some of the common ones:

  • workstation-class devices – This is usually one of the easiest to handle, since the team responsible for workstation procurement likely has an inventory of what they have in order to please customers. Being able to tap into this inventory list, or at the very least view it, is essential. For instance, how do you know you have antivirus or endpoint protection on every workstation? You have to true that up with the inventory list (see the reconciliation sketch after this list). Think about the question, “How would I know something is missing security control XYZ?”
  • mobile devices (on your network and/or company-owned) –
  • servers – Typically, one team manages workstations and another team manages the servers. This team should have the beginnings of an inventory system due to licensing needs, storage/compute resource needs, and other OS-specific collections such as Active Directory or patching coverage. But the same question applies here: “How would I know something got missed in inventory?” Or, in the case of a largely Windows environment, “How do I find a new non-Windows asset that is stood up without notice?”
  • networking assets – This could include diagrams of the networks, both logical and physical when needed, for both wireless and wired networks. If the networking team manages it, it should be in this group.
  • all other network devices – This covers all the other things not nicely slotted into the above categories, like appliances or IOT. This also covers unauthorized device discovery. Essentially, if something is on the network, it needs to be found and known.
  • the cloud – The cloud is often a different beast, especially when consumed dynamically with assets coming on and off as demand moves. Worst case, you go through all other steps above over again with “cloud” in the front of it.
  • internal information systems/sites – This is about knowing the information systems that your business and users consume, which often comes in the form of internal websites, but could be other tools and systems. Largely this is defined by things that store/handle data.
  • software and applications – A huge endeavor on its own, but nonetheless important to know the software and applications in use and needed (and hopefully approved and tracked).
  • external attack surface/footprint – This is what attackers can see and will target; high-risk and high-priority assets and paths into the organization. This isn’t just Internet-borne, either, but could come in through other weak links such as wireless networks or VPN tunnels.
  • vendors – A good risk management program will have an inventory of all official vendors, which will fuel risk reviews and inform security of what is normal.
  • third-party services hosted elsewhere – What services do the business and its users consume that you don’t strongly control? These likely still impact account management and permissions, data tracking, and your evaluation of those services, since you extend them some measure of intrinsic trust, which is a potential risk for you.
  • critical business systems – This could be considered a little advanced, but it’s about knowing what’s really important to the business, which informs risk priorities, spending, and other activities like BCP/DR.
  • data/critical data – You can’t secure data if you don’t know where it is, and have some idea on what data is more important than others. Yes, this one is difficult outside of narrow compliance definitions (aka all data vs just credit card data). Honestly, this bullet item should be a top level category in itself.
  • authentication stores – This is about knowing what accounts you have, where they authenticate against (are stored), and what your users and systems actually use to do things.
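And to make the workstation bullet’s “truing up” concrete, here’s a minimal reconciliation sketch; the two CSV exports and their Hostname column are invented for illustration:

    # Hosts in the inventory that the AV/endpoint console has never heard of
    $inventory = Import-Csv '.\inventory.csv' | Select-Object -ExpandProperty Hostname
    $avAgents  = Import-Csv '.\av-agents.csv' | Select-Object -ExpandProperty Hostname

    Compare-Object -ReferenceObject $inventory -DifferenceObject $avAgents |
        Where-Object SideIndicator -eq '<=' |
        Select-Object -ExpandProperty InputObject

Flip the SideIndicator to '=>' and you instead get agents reporting in from machines the inventory has never heard of, which is its own finding.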

There are different methods to find this out:

  • process/documenting – This is the default method shops will use to track inventory. If someone stands up a new box on the network, they update some inventory sheet or follow some checklist to include the new asset in something else (adding it to monitoring, patching, or joining a domain). This is a trust exercise, as you need to trust that every team member follows the process and that every process is all-encompassing. This includes decommissioning assets as well. This should also include the assignment of ownership: who in the company is ultimately responsible for this asset?
  • active/finding – Most of the time, security should assume the worst (trust, but verify), which means finding the assets that are weird exceptions or that just get missed in the normal process. Active inventorying means looking out onto the network and scanning it, finding assets, identifying them, and pulling them back into visibility/compliance (a toy sweep of this sort appears below). The opposite is true as well: you want to find assets that aren’t meant to be there!
  • passive/watching – There are also passive techniques to find devices, such as watching all network traffic or alerting (and even blocking) unauthorized assets from accessing the network. This is still a fallible control, but it is part of the puzzle of knowing what is on a network.

There are a few caveats to the above. First, it’s not 100%; there may be a “bump-in-the-wire” or other passive device on the network (think of a network tap just collecting data). There are also device peripherals (mice, keyboards, headsets, readers of various types…). Tackling those is a bit advanced. Second, especially with the active methods, this needs to be done continuously, or the controls need to be continuously active. If you do active scans once a day, an attacker or insider could still turn on a device, do whatever, and turn it off before the next scan. Handling these windows is why we practice continuous improvement and defense in depth, and why we map out maturity plans.
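As a toy illustration of the active method, even a crude ping sweep of a single /24 (the subnet and output file below are made up) surfaces the anomalies worth chasing:

    # Ping-sweep one subnet and note what answers; compare the results against inventory
    $found = 1..254 | ForEach-Object {
        $ip = "10.0.5.$_"
        if (Test-Connection -ComputerName $ip -Count 1 -Quiet) {
            [pscustomobject]@{
                IP   = $ip
                Name = $(try { [System.Net.Dns]::GetHostEntry($ip).HostName } catch { 'unresolved' })
            }
        }
    }

    # Anything in here that inventory can't explain needs an owner or an unplugging
    $found | Export-Csv '.\discovered-hosts.csv' -NoTypeInformation

A real program would lean on something like Nmap, DHCP logs, or switch data rather than ICMP alone, but the principle is the same.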

And Miessler includes 5 questions that drive the measurement of a security team based on how it answers them:

  1. What’s currently facing the Internet?
  2. How many total systems do you have?
  3. Where is your data?
  4. How many vendors do you have?
  5. Which vendors have what kind of your data?

consuming infosec news and social commentary

(This is just me publishing an incomplete post from the past.)

How do I consume and keep up with infosec news?

Twitter is an always-running stream of water. You walk up to it when you’re thirsty, take a drink, and then walk away. You don’t try to catch everything that flowed by while you were away.

Those of us who grew up with IRC don’t find that a weird or foreign concept, since it often operates the same. You walk away from IRC for a day, and then come back, but you don’t typically try to read the entire buffer of any busy chat. You look for a few topics of interest, perhaps, or highlighted key words, but otherwise you just sit back down, read a screen or two up, and plug yourself into the conversation as is. Busy Discords can work this same way when conversations are not split up into multiple channels.

Pet peeve: Breaking topics into too many channels too quickly. This setup works for very busy forums or messenger/chat locations. But it kills momentum for anything not super large. It splinters discussions down from a healthy rate to a trickle that won’t keep anyone engaged for long. It also requires its users to keep clicking through various channels for any chance to keep up with conversations. I think what happens is new Discords/Slacks/Forums see how the big established ones do it, and they try to emulate that structure immediately. But that doesn’t work in the end. You need traffic first before justifying the splintering. (As someone who has lived 90% of my adult life in some form of online community, and as someone who has run several of them years ago, I have plenty of opinions on this topic…)

music to learn and hack to

(publishing an old “incomplete thoughts” draft) We all have a preferred environment and/or music we like to hack and learn and work to. Most recently, I spent much time at home practicing and learning in the PWK/OSCP labs and exam, often to a background of music. I thought I would share some of my interests in this regard. If you read nothing else here, at least go give SomaFM a listen, particularly their DefCon Hacker Radio and Groove Salad stations. I’ve been a regular listener of Groove Salad since around 2003, and it’s absolutely excellent.

When I’m heads-down doing something, most of the time I’m probably listening to one of four types of music.

The most common for me is “chill out” music, largely electronic, but it could also be acoustic or traditional. This largely stems from enjoying new age music from the 80s/90s, which then expanded into electronic music through the late 90s and on (think Enigma and Kitaro transitioning into Underworld and Sasha). Unless I’m listening to my own stuff, this is where I’ll tune into Groove Salad on SomaFM. I don’t remember how I found the station or why or what led me there, but it definitely solidified “chill out” as a thing that I totally dig. (And I totally geeked out when BackTrack used to include a SomaFM bookmark in its default browser!)

I’ll also enjoy other electronic music, but if it has more intensity or a beat to it, I don’t include it in the chill category; instead, it gets lumped all together into my general electronic folder. This encompasses anything from classic trance, goa/psytrance, dubstep, EDM, and so on. I tend to stick to my own collection when listening to this, but I might queue up a large set of stuff on YouTube, use SoundCloud to listen to some sets or DJs, or maybe Pandora or Digitally Imported feeds on TuneIn Radio.

Sometimes, I’m in a really heads-down mood or just want something less electronic, and I’ll turn to either normal classical music or something less orchestral, like cello, guitar, or piano artists doing their own thing. Most of the time when I listen to this, I’m firing up TuneIn Radio and just listening to the Iowa Public Radio Classical station. No ads, decent quality, good variety. Failing that, there are tons of long collections on YouTube to listen to.

Lastly, I’ve also always enjoyed hard rock that borders on metal, but never really metal. I tend to be pretty picky when it comes to this (Metallica, Tool, White Zombie… which betrays my age), but lately I’ve gotten into symphonic metal bands. I still have plenty of things I consider to be “harder” rock music (basically anything more intense than “pop” music), and sometimes that’s my mood.