getting into and growing inside the infosec industry

A revival of sorts of content from BHIS on getting into the infosec industry, including A Career in Information Security FAQ Part 1. Pretty good stuff! But this section really stuck out to me:

The customer service, tech support, help desk, etc., these jobs are crucial to forming a solid background in computer science. Learn how to solve problems effectively. Learn how to discern between useful web search results and wastes of time. Employers don’t want to hire you for what you know. I generally believe that anyone (some computer background) can be trained to accomplish digital tasks. I can’t train you to manage your time well. We can’t train people to be nice, treat others like human beings, or to be steady under pressure. And truly, those are the skills that will put you at the front of the line. It worked for me and everyone else at BHIS too.

I would include other skills such as asking questions, being curious, being tenacious, looking at ways to break and fix things, and having a quick mind to solve puzzles.

And to be honest, that whole post is a wonderful bit of encouragement and advice for anyone to read, newbie or jaded veteran. Things like, “That motto is ‘Fail Fast, Fail Often, and Fail Forward’. When you are working on solving a problem spend more time failing and less time analyzing the problem from a distance,” and “One of the most critical skills in information security is the ability to go off script.” That’s gold right there, alone!

Addendum: I do want to point out the question towards the bottom of the post about the biggest hurdles in first getting started. And it might be obvious, but it bears constantly repeating that the two biggest items are 1) experience, and 2) imposter syndrome. The former is just something you get past after a few years of work. The latter is a lifelong personality and internal compass issue where we just have to come to terms with the scope of infosec and how no person can begin to swallow that whole ocean. Learn what you can, balance your life, fail fast, move forward, get better, succeed.

the cat and mouse game of security improvement

I don’t often find fairly general articles to have enough interesting nuggets and quotes to bother saving, but sometimes they just flow so well and include plenty of head-nodding things to agree with, all with wording that I appreciate. One such article came across from Dark Reading, Think Like an Attacker: How a Red Team Operates. Dark Reading seems to like limiting the ability to read articles, so I don’t mind being a bit liberal in pulling out quotes I like.

“The whole idea is, the red team is designed to make the blue team better,” explains John Sawyer, associate director of services and red team leader at IOActive. It’s the devil’s advocate within an organization; the group responsible for finding gaps the business may not notice. I just love that sound bite. I want that to be my elevator job description.

“The main function of red teaming is adversary simulation,” says Schwartz. “You are simulating, as realistically as possible, a dedicated adversary that would be trying to accomplish some goal. It’s always going to be unique to the target. If you’re going to get the maximum value out of having a red teaming function, you probably want to go for maximum impact.” The early part of the article does a great job of succinctly comparing pen testing and red teaming while also illustrating how these have changed as time has moved on. Old school pen testing has shifted toward being called red teaming, in part to further differentiate it as pen testing has become commoditized.

The team ends up chaining together a small series of attacks – low-level vulnerabilities, misconfigurations – and using those to own the entire domain without the business knowing they were there, he says. Typically, few employees know when a red team is live.

Red and blue teams may work together in some engagements to provide visibility into the red team’s actions. For example, if the red team launches a phishing attack, the blue team could view whether someone opened a malicious attachment, and whether it was blocked. After a test, the two can discuss which actions led to which consequences. Beyond actually enjoying it, this is my whole value proposition for my interest in offense and red teams: It makes my defense better. Which makes me get better on offense. Which makes my defense get even better… Getting a root shell or DA credential is the addiction; the satisfaction is passing on the information to make improvements.

More and more companies are starting to realize if they limit themselves to the core fundamentals of security, they’re waiting for something bad to happen in order to know whether their steps are effective, says Schwartz. Red teaming can help them get ahead of that… Many companies are building red teams in-house to improve security; some hire outside help.

The main reason behind building a red team internally is that it grows and improves along with the defenses. As security improves, so do the skills of red teamers. Offensive experts and defenders can attack one another, playing a cat-and-mouse game that improves enterprise security, he continues. Internal teams are also easier to justify from a privacy perspective.

Overall, the pros argue a full red team can help prepare for modern attackers who will scour your business for vulnerabilities and exploit them – the difference being that the red team will help you stop those real adversaries.

“The difference between a red team and an adversary is, the red team tells you what they did after they did it,” Schwartz says.

That’s such a strong ending to this article, that I had to pull a bunch out right there. Wonderful!

rapid7 releases 2018 under the hoodie pentesting report

Rapid7 has released the second edition of their now-annual “Under the Hoodie” report, a compilation of information and statistics gathered from across Rapid7’s penetration testing engagements. There’s probably nothing terribly surprising in here, but it’s always nice to have some raw numbers and anecdotes in pocket for various conversations. Here are a few interesting tidbits or quotes I wanted to pull out.

“Relying entirely on an automated solution or a short list of canned exploits is likely to meet with failure, while a more thorough, hands-on approach nets significant wins for the attacker.” This statement has importance for internal security testing, third-party testing, and also for defenses. The first two can be obvious, but the last one about defense helps frame models, for instance the impact of an internal threat or an attacker specifically targeting a company rather than just automating a search for opportunistic moments. It also says, between the lines, that an attacker who puts in some hands-on effort and isn’t time-boxed like a pen tester can see success.

“Furthermore, these results imply that if the penetration tester is not detected within a day, it’s unlikely the malicious activity will be detected at all.” Detection is a big deal. I’d also throw in the practice of threat hunting to find successful attackers who have gotten past the outer layer of defense and alarms. I recently deleted a draft about the whys and hows of the rise of threat hunting/intelligence (I posited it was a combination of the reduction in AV/IPS signature success, the complexity of environments, the rise of offense-friendly staff looking for offensive things to do, and other factors…). Prevention is important, but solid and effective detection matters.

“The number one issue that causes the most consternation among penetration testers is solid network segmentation. If they cannot traverse logical boundaries between environments, it can be extremely difficult to leverage a set of ill-gotten workstation credentials to escalate to domain-wide administrative privileges; even if a powerful service account has been compromised, if there’s no route between targets, the pentester must effectively start over again with another foothold in the network.”

Other factors that cause frustration for pen testers are multi-factor authentication for accounts, least privilege practices on accounts, strong patching and vulnerability management practices, and awareness to spot and report phishing campaigns, social engineering, and other low-tech attacks. What’s fun is how these 5 items are disciplines that blend security with other, very different departments: The network team for segmentation, systems/developers for 2FA/MFA, systems for patching, IAM for least access, and everyone for awareness. You can’t just boost one area of the company (or just security itself).

changes to the site – sidebar links are a bit of a relic

Recently went through and cleaned out dead links in my Feedly news feeds. Not only did this kick in plenty of nostalgia, but it also reminded me that I should update the sidebar links on my blog! While going through these, I sat back and thought about how time-consuming this process is, how annoying it is to update WordPress themes (just give me a raw txt file that I can put code in rather than wrestle with weird interpretations and random carriage returns!), and for what personal purpose this even mattered.

In short, I need to sit back and think about what exactly I am doing with this blog site and how to make it better for me. Moving to hosted WordPress has helped with site maintenance, but has made other things more difficult. In the past, I always edited files by hand and coded things directly; these days I tend to use the WYSIWYG editor, but it’s not usually quite what you see…it’s more like wrestling with a slippery eel to get things to look the way I want, rather than the way the themes want. This makes updating the sidebar annoying. At best.

There are really four parts to my blog: the posted content, the sidebar link list, comments to posts, and the links at the top that spider out to other things about me, with this blog page being the nexus point where they converge.

The sidebar links
The extensive sidebar list of links has been part of the site’s identity since the beginning, but it’s also an old school relic.

The list is somewhat save-and-forget, except for some of the most-used items. The rest, I honestly forget are even there. For some, it’s still just better to use Google to get the latest and greatest.

These links are also best used by me, and probably not clicked on by anyone else ever. The list is roughly duplicated across my Feedly feeds, podcast subs, YouTube subscriptions, Twitter follows, and Discord server memberships.

I do know that clicking links will send referral pokes to the targets…maybe. It’s one of those ways to get noticed, but I’m not sure blogs and/or comments are “noticed” anymore or really followed at all. A blog used to be your focal point online that other things revolved around, but these days the social sites have supplanted them. There is also so much flow these days that I don’t ever really “catch up” on blogs I’ve missed. They’re much like IRC or Twitter; you pop in and maybe look at the recent buffer, but the rest of the log is in the past and there’s no reason to spend that time reading backwards.

The bottom line: the link sidebar is a relic with questionable value to me, and is annoying to update.

The comments
The comments are easily forgotten, since I don’t get many and don’t expect many. The problem is the lack of two-way discussion. Comments on blogs are often post-and-forget; commenters rarely look back for an update without specific effort to do so. It’s far better to follow and tweet to someone on Twitter these days, or in extreme cases, find someone on a Discord/Slack/IRC.

In the past, prior to all the social networks, blog comments were useful to expand your exposure. Comment on someone else’s blog, put your own link in the comment, and likely get a poke or comment back in return. Again, though, today that is better done on Twitter/Discord and by posting content that is actually useful to consume.

To be fair, comments are cool, akin to a Like, but these days dialogue is best done elsewhere.

The bottom line: Ultimately, comments are an after-thought these days on any but the most popular blog sites, like Krebs’ blog.

The blog contents
But that does bring up the question of why I should ever update the blog at all. I honestly don’t look back on many things. The two biggest reasons: 1) it shows off my interest, and 2) it allows me to organize and solidify thoughts. I may not reference the post itself ever again, but the act of writing something out helps ingrain the information and thoughts.

It’s not something I really do for anyone other than myself, except as a way to sort of demonstrate my interest/enthusiasm/participation in the greater communities.

The posts I most-often re-reference myself are the personal ones like my yearly goals and results, or links to really informative checklists and processes; things that I struggle with putting links to in the sidebar only to forget them!

The bottom line: I still like maintaining the blog and it does have personal value to me.

The personal link nexus
I can’t see this going away anytime soon, except maybe moving to a GitHub page with a similar list of links. The whole point is to act as a point of convergence for my “stuff.” A place to find my Twitter link, LinkedIn page, GitHub page, and just a little bit about me (that age-old bio or About page that I feel is still necessary to tell your story properly).

Being able to control this convergence is still an interesting deal, as it lets me decide whether I want my personal name attached to a particular screenname somewhere, but as I get older, I also care less, except where my own personal threat models are concerned.

As a bonus, I still love my personal domain.

The bottom line: I still plan to use this personal domain and resident site to be my nexus here, and I think I’ll expand the links a bit to include GitHub and maybe some other spots.

Plans
The links on the sidebar could…be put into a GitHub repo instead of this site, and probably be more easily updated, too.

I could also use GitHub to save backups of things like my podcast OPML, Feedly feeds export, and so on. Things that are not sensitive or inherently private.

A GitHub repo is at least easier to update. And while it might not fix anything about my list of links and its usage, maybe it’ll help me pare it down a bit. Better yet, if I have a Feedly export, why bother with the blog/news lists?
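As a rough sketch of that idea (nothing I’ve actually built yet), a Feedly OPML export could be flattened into a markdown link list for a GitHub repo with a few lines of Python. The filenames here are placeholders, and Feedly’s exact OPML nesting and attributes may differ:

```python
# Sketch only: flatten a Feedly OPML export into a markdown link list for a
# GitHub repo. "feedly.opml" and "links.md" are placeholder filenames.
import xml.etree.ElementTree as ET

tree = ET.parse("feedly.opml")

lines = []
# Feed entries are <outline> elements carrying an xmlUrl attribute; category
# folders are <outline> elements without one, so they get skipped here.
for outline in tree.iter("outline"):
    feed_url = outline.get("xmlUrl")
    if not feed_url:
        continue
    title = outline.get("title") or outline.get("text") or feed_url
    site = outline.get("htmlUrl") or feed_url
    lines.append(f"- [{title}]({site}) (feed: {feed_url})")

with open("links.md", "w", encoding="utf-8") as f:
    f.write("# Blog / news feeds\n\n" + "\n".join(sorted(lines)) + "\n")
```

Commit that output to the repo and the sidebar list effectively becomes a by-product of the Feedly export rather than something to maintain separately.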

There is also the option of having a private GitHub repo for a few other things. I definitely don’t want to make it a huge “backup” of things, since that’s what other file-sharing services are sort of for, but at least some of my online presence and “home” page content can be tailored a bit in private.

bloodhound, measuring how exposed a domain is

Recently watched a talk about a tool I’ve known about for a while, and just haven’t gotten to in my to-do list. I used the output of the tool briefly on a HackTheBox.eu target to much success. And after watching the talk at SecDSM, I’ve gotten excited again about employing this at work someday.

BloodHound, by researchers at SpecterOps, is a tool that exposes Active Directory permissions and relationships, with the goal of achieving Domain Admin (DA) or other high-value access in AD to pwn the domain entirely and win the game. This might sound unexciting if you only think about accounts, groups, and group memberships. But BloodHound goes deeper and wider by looking at the actual underlying AD object permissions and how those objects relate to various computers in the domain.

During the talk linked above, one of the best parts is near the end when they discuss metrics. I really loved these metrics, which effectively measure how much exposure the domain has and how much effort an attacker will have to exert to pwn the environment with regards to AD permissions. They also illustrate opportunities to detect the attackers. (A rough sketch of pulling these numbers out of BloodHound’s data follows the list.)

  • Users with Path to DA (target: 5%) – The lower, the better, as you really don’t want to think that every user that could be compromised could lead to the end of the domain.
  • Computers with path to DA (target: 5%) – Same story here, you don’t want to think most systems are just a few hops away from DA. Even a single malware/phishing success is dire!
  • Average Path to DA Length (target: 5) – The longer the better, as you want attackers to go through as many steps as possible to get DA.
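Purely as an illustration (this isn’t from the talk itself), here’s a minimal sketch of how the first metric could be pulled out of the Neo4j database that BloodHound populates. The connection details, credentials, and Domain Admins group name are placeholders, and the Cypher assumes BloodHound’s usual User/Group node labels; the computer metric would be the same query against Computer nodes, and the average path length could come from avg(length(p)).

```python
# Sketch only: compute "percentage of users with a path to Domain Admins"
# from a BloodHound-populated Neo4j database. URI, credentials, and the
# group name are placeholders for whatever the environment actually uses.
from neo4j import GraphDatabase

URI = "bolt://localhost:7687"           # default local Neo4j endpoint (assumed)
AUTH = ("neo4j", "bloodhound")          # placeholder credentials
DA_GROUP = "DOMAIN ADMINS@CORP.LOCAL"   # placeholder Domain Admins group

TOTAL_USERS = "MATCH (u:User) RETURN count(u) AS total"

# Count distinct users that have any shortest attack path to the DA group.
USERS_WITH_PATH_TO_DA = """
MATCH (u:User), (g:Group {name: $da})
MATCH p = shortestPath((u)-[*1..]->(g))
RETURN count(DISTINCT u) AS exposed
"""

def percent_users_with_path_to_da() -> float:
    with GraphDatabase.driver(URI, auth=AUTH) as driver:
        with driver.session() as session:
            total = session.run(TOTAL_USERS).single()["total"]
            exposed = session.run(USERS_WITH_PATH_TO_DA, da=DA_GROUP).single()["exposed"]
            return 100.0 * exposed / total if total else 0.0

if __name__ == "__main__":
    print(f"Users with a path to DA: {percent_users_with_path_to_da():.1f}% (target: under 5%)")
```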

hackthebox progress over the summer, meeting and exceeding my goals

Part of the reason it took so long to take my GCFA exam was splitting my study time with Hack The Box progress. Earlier this year I bought VIP access to HTB, wanting to keep my offensive skills practiced while also learning new ones. I did more than I was expecting, and after making some friends smarter than I am, was actually able to far surpass my goals and expectations, achieving 100% completion and the top Omniscient ranking sometime in mid-August. I still have to go back and properly learn some of the things I found way too difficult to do alone (let’s face it, the best people are the best due to learning through teams, and the best red teams have multiple people): namely binary exploitation and reversing. I get how they work, but I need my hand held way too hard right now! I still also have the “optional” sections to complete (Fortress and Endgame), and I’d like to dive into RastaLabs and Offshore, probably in 2019.

Really, my main goal was to keep the skills and processes I developed in the PWK fresh, while also learning new and more advanced tricks and tools and techniques. And I feel like I’ve succeeded in that aspect. These skills get dusty and rusty if not practiced regularly, either on one’s own or while in the course of work duties.

passed giac certified forensics analyst (gcfa) exam

This past Friday I had the pleasure to sit for the GCFA (GIAC Certified Forensic Analyst) exam and pass with a 94% score. Quite the relief after a summer of (somewhat slowly) making study progress. In May, I attended the SANS FOR508 training at SANS West (San Diego). Shortly after, I took a bit of a break, and since then have slowly studied and gotten ready for my exam attempt. I’ve blogged about the course before, so I’ll try not to rehash anything. The course was my first SANS experience, and this exam was thus my first GIAC exam experience as well.

Did you take the practice exams? Yes I did. In late August I took the first practice and scored an 83% with only about 9 minutes remaining at the end. At that point I was pretty nervous, but I was also not quite done with my study plans. A week later I took the second practice and scored an improved 93% with 30 minutes to spare. They were definitely helpful to see the exam format, get familiar with the interface, and get a feel for the question style. The real exam felt extremely similar, and while the questions were not duplicated, they felt like they were written by the same author(s) and in the same style as the practice ones. For the second practice, I turned on the ability to see explanations for both correct and wrong answers, while on the first attempt I didn’t know that option was present and just saw my missed answers. Also, I limited myself to my books and my digital index with no spreadsheet search functions; just scrolling and eyeballing. I also kept paper nearby to write down any concepts I missed, or those I got correct but struggled with, for later review.

Would you recommend the practice exams? Yes! I probably could have passed if I had skipped them, but they did absolute wonders for allowing me some feedback on where I stood and gave me a chance to gain confidence and familiarity with the question styles. The practice also gave me two chances to test out my index, hone it, and become even more familiar with the books, adding to my efficiency in an exam where time is precious. Most importantly, this whole study process helped me grasp and “get” the content so much better than just the course alone.

Did you have your own index for the exam? Of course! My goal with the index was to use it to not necessarily answer every question for me, but to give me enough information to come to a probable conclusion, and to then point me to the correct places in the materials to confirm that answer. My true place for answers is the books, and I wanted to provide enough context to be able to look up the appropriate information in the right place when I came across a term or subject in the exam. My index ended up being about 45 pages landscape, with 1536 rows at 8 point font. Having it top-bound was wonderful (about $13 printed online at Fedex/Kinkos).

When creating my index, I started out with a spreadsheet tab for each book. I had four columns: SUBJECT, TERM, DESCRIPTION, BOOK-PAGE. In retrospect, I never used the SUBJECT column, and I’ll leave it out on future exams. Within the per-book tabs, I’d leave the notes in chronological order. On a separate MASTER tab, I would regularly copy/paste the other tabs’ contents and sort by the TERM column to see my MASTER index. This MASTER tab was what I would later print out.
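As an aside, that copy/paste-and-sort step is simple enough to script. Here’s a minimal sketch in Python, assuming each book’s tab were exported to CSV with those same four columns; the filenames are made up:

```python
# Sketch only: merge per-book CSV exports (SUBJECT, TERM, DESCRIPTION,
# BOOK-PAGE columns) into one MASTER index sorted by TERM. The "bookN.csv"
# naming is hypothetical.
import csv
import glob

rows = []
for path in sorted(glob.glob("book*.csv")):
    with open(path, newline="", encoding="utf-8") as f:
        rows.extend(csv.DictReader(f))

# Sort case-insensitively by TERM, mimicking the MASTER tab.
rows.sort(key=lambda r: r["TERM"].lower())

with open("master_index.csv", "w", newline="", encoding="utf-8") as f:
    # Drop the unused SUBJECT column, as noted above.
    writer = csv.DictWriter(f, fieldnames=["TERM", "DESCRIPTION", "BOOK-PAGE"])
    writer.writeheader()
    for r in rows:
        writer.writerow({k: r[k] for k in ("TERM", "DESCRIPTION", "BOOK-PAGE")})
```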

If a term appeared more than once, it would get more than one entry. I didn’t want to squish BOOK-PAGE numbers into a single row at all. For multiple page mentions in a row, I’d make highlighter arrows in the books to prompt me to look ahead if the topic continued. If a topic had multiple terms or an acronym, I’d include all of them in their own entries. I would try not to do the whole “See Topic X.” I did that early on, but hated it, and moved away from it later (the one time I came across such an entry during the exam, I cursed myself). The goal was to go from Index to Books, not Index to Index to Index. I tried to be complete enough in general in the Index, but invariably questions would ask for very detailed specifics. And I didn’t want to solely trust myself to transpose the terms correctly, so I didn’t try to be exhaustive; as said earlier, get to the books efficiently! I also indexed terms on the blue and red posters. (Both of which I used in the exam, though much of the information can, in fact, be found in the books.)

I initially limited myself to a single line of description per term, but eventually I acquiesced and allowed myself multiple lines (hold Shift when pressing Enter while in entry mode to add a newline inside a cell). My index would have been longer and even more immediately useful had I not decided that pretty late.

I also used sticky tabs at the top of the books to mark key pages and sections. This way I had the option to skip my index altogether if I knew what general section I wanted to flip to. I used them a lot, too, not just during the exams, but when studying as well! I honestly think doing this saved my butt.

To be honest, I’m a natural information organizer. If I were more of a social person, I’d probably be a project manager! I’m also a note-taker, so doing this index was a loving exercise, rather than a chore. It also helps to remember that this index is a one-time use item. It doesn’t need to be perfect or pass muster for inspection by an editor. Everyone has their own level of perfection they need, but I know my index isn’t without mistakes, has holes, and maybe has more or less than it should. But that’s why I wanted to make sure it led me to the books as much as needed; trust myself, but verify the answer!

What was your study plan? After the 6-day course in San Diego, I probably took a good two weeks off. After that, I started going through the course books again. My goal was to read every word of the books (slides and notes). And yes, that took a while. I would highlight in orange every tool mentioned in the books, and write it into a separate notebook of mine (my own personal list of tools). I would highlight key topics and statements with a green highlighter. After about two books, I actually started adding key terms, concepts, tools, and topics into a spreadsheet to begin my actual index. I then went back and caught up the first two books with a quicker pass.

Once done reading the books, I accessed the On Demand content to listen to the lectures again, follow the slides, and follow along in my books. This was essentially another pass through the material, and a second full pass to populate my index with things I missed or wanted to flesh out. For instance, I didn’t decide to include full command examples until my second pass. While winding down the On Demand materials, I also started going back and doing the lab exercises again, at least as much as I could (some tools expired). (I did *not* actually include the exercise workbook notes in my index, and I wish I had done so.) Doing all of the above really helped cement the material in my head, but it also caused me to really *get* it, if that makes sense. Context fell into place, along with the reasons for various things, and it all feels natural and confident now.

In the day or two before the exam, I limited myself to just flipping through the books. I took the early part of that week off, and doing this allowed me to get familiar with the tabbed sections again, for quick reference and flipping to my tabs.

How was your actual exam experience? Pretty good! I got in early and got going pretty well. The exam itself is a brutal slog of 3 hours, and I definitely made plenty of use of it to be as sure of my answers as I could be in a short period of time. Even with my index, there were a few questions that had me somewhat stumped or utterly unsure where to look for that information. Thankfully, in other cases where my index didn’t have the proper information, my knowledge of the books would lead me to the right sections. The exam questions were some of the best-written questions I’ve seen. To the point, clear, proper English, but you still have to read them carefully to pick up on any twists or tricks afoot. Honestly, the questions and answers were wonderful and did nothing to detract from the experience and the ability to demonstrate mastery over the topics.

Is there anything with the materials brought on-site for the exam that you’d do differently? Without getting too specific, I think it would have been useful to better document or print screenshots of the output of the tools mentioned. Not all of them, since there’s a ton! But any of them are fair game for questions. Ideally, it should be enough to have used any and all tools during the labs or self-study when re-doing the labs. But that does take effort, as the labs themselves will not use every plugin and tool mentioned in the books. I also am not sure how one would consume such print-outs efficiently while taking the exam, so maybe I was better off without them!