If you want or need to pull data from Tenable's Container Security solution (part of the Tenable.io offering) using the pyTenable API wrapper, you might have stumbled over the fact that the pyTenable documentation is not aimed at beginners and does not provide any complete example scripts.
Search no more! Here is an example script showing how to import the Container Security wrapper, authenticate to Tenable.io Container Security, and pull the reports / vulns for all scanned container images:
from tenable.cs import ContainerSecurity

yourAccessKey = "xxx"
yourSecretKey = "xxx"

cs = ContainerSecurity(access_key=yourAccessKey, secret_key=yourSecretKey)

# iterate over all scanned images and pull the report for each one
for image in cs.images.list():
    report = cs.reports.report(image['digest'])
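If you want to do something with the pulled data right away, here is a small sketch that tallies the findings per report. Note that the `findings` and `nvdFinding` field names are assumptions about the report layout, so check an actual report from your environment first:

```python
# Hedged sketch: count the vulnerability findings in one report dict.
# The 'findings' key is an assumption - verify against a real report.
def count_findings(report):
    """Return the number of findings entries in a report dictionary."""
    return len(report.get('findings', []))

# Offline demo with a fabricated report, so the function can be tried
# without Tenable.io credentials:
demo_report = {
    'image_name': 'demo/alpine',
    'findings': [
        {'nvdFinding': {'cve': 'CVE-2019-0001'}},
        {'nvdFinding': {'cve': 'CVE-2019-0002'}},
    ],
}
print(count_findings(demo_report))  # 2
```

In the loop above you could call `count_findings(report)` per image to build a quick overview of which containers need attention first.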
I hope this helps anybody out there starting to automate their vulnerability management process!
If you ever need to deploy Tenable.sc in an airgapped or otherwise offline environment and need guidance on how to implement automated plugin updates, this is the right blog post for you!
Note that you will require a valid Tenable.sc subscription, which comes with:
A license file matching the hostname of your Tenable.sc host – which can be applied via the normal admin web interface or during the setup wizard without any internet connectivity
A plugin activation code, which you do not apply in the Tenable.sc admin interface in an offline setup. Make sure not to activate the code (for example by temporarily connecting the sc to the internet), as an already activated code will not let you download the plugins via the offline download website!
Downloading the plugins is rather straightforward. On the internet-facing side of your airgap you can automate the download of the plugins quite easily using curl or wget, following this documentation provided by Tenable:
If you are using the Tenable Core appliance, do not be discouraged by the following paragraph in the documentation:
You can simply scp the most recent CentOS 7 Nessus installer to the Core appliance and follow the steps provided in the documentation there as well.
This procedure only generates a challenge code, which is probably used to sign the plugin package so that it only works on the intended system – presumably to prevent license violations.
Data Transfer over Airgap
The mechanism for transferring the plugin and SC feed tar.gz files across the airgap is not part of this article. Use whatever data transfer you have in place to either:
Place the plugin update files on the system underlying Tenable.sc itself – or
Place the plugin update files on any system that can reach the API of Tenable.sc
Applying the Plugin Updates to SC
There are multiple ways to script the upload and processing of the plugins in Tenable.sc:
1. Update the Plugins via CLI / PHP
Probably the easiest way is to just apply the updates on a schedule / cronjob via simple php invocations in a shell script:
su - tns
# apply the plugin update
/opt/sc/support/bin/php /opt/sc/src/tools/pluginUpdate.php /tmp/sc-plugins-diff.tar.gz
# apply the SecurityCenter feed update
/opt/sc/support/bin/php /opt/sc/src/tools/feedUpdate.php /tmp/SecurityCenterFeed48.tar.gz
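Wrapped into a small cron-able shell script, this could look like the following sketch. The `FEED_DIR` location is an assumption (adjust it to wherever your airgap transfer drops the files); the php tool paths are the Tenable.sc defaults from the commands above:

```shell
#!/bin/sh
# Sketch of a cron wrapper around the php update tools shown above.
# FEED_DIR is an assumption - point it at your transfer drop directory.
FEED_DIR="/tmp"
PHP="/opt/sc/support/bin/php"
TOOLS="/opt/sc/src/tools"

for f in sc-plugins-diff.tar.gz SecurityCenterFeed48.tar.gz; do
  if [ ! -f "$FEED_DIR/$f" ]; then
    echo "skipping $f - not found in $FEED_DIR"
    continue
  fi
  case "$f" in
    sc-plugins-diff.tar.gz) tool="pluginUpdate.php" ;;
    *)                      tool="feedUpdate.php"  ;;
  esac
  echo "processing $FEED_DIR/$f with $tool"
  "$PHP" "$TOOLS/$tool" "$FEED_DIR/$f"
done
```

Run it as the tns user from cron; files that have not arrived yet are simply skipped until the next run.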
For this, the plugin files obviously have to reside on the system Tenable.sc is installed on, or be otherwise accessible from that system via a mounted share or similar! The same of course applies to the passive and LCE / event update files if you are on SCCV!
A successful plugin update will look like this in the Tenable.sc log:
2. Update the Plugins via API using the pyTenable API wrapper script
If you want to use the API to upload the plugins, I recommend the pyTenable API wrapper, which allows you to use a fairly simple python script:
from tenable.sc import TenableSC

sc = TenableSC('172.16.121.133')
sc.login('username', 'password')
with open('sc-plugins-diff.tar.gz', 'rb') as plugfile:
    sc.feeds.process('active', plugfile)  # upload & apply the plugin update
with open('SecurityCenterFeed48.tar.gz', 'rb') as feedfile:
    sc.feeds.process('sc', feedfile)  # upload & apply the SecurityCenter feed
No, these are not the IP, username, and password of a productive Tenable.sc system! :) Also, rather use API keys, now that they are available in Tenable.sc as well:
A successful active plugin update with the above debug logging activated will look like this:
and it will look like this in the Tenable.sc log:
Using the API you can choose to call it either from the Tenable.sc machine itself (including the Tenable Core appliance, which comes with python preinstalled) or from a different system – for example a central update system in your airgapped environment.
Getting pyTenable and dependencies on the Airgapped Tenable.sc System
Next step – how do you get pyTenable and its dependencies onto the airgapped Tenable.sc host?
Luckily the Tenable Core appliance comes with pip3 and python3 preinstalled, so it's rather simple:
First we use any internet-connected Linux or macOS system with python3 and pip3 installed to download pyTenable and all of its dependencies and package them into a tar.gz file:
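The download-and-package step can be as simple as this sketch; `pip3 download` fetches a package plus all of its dependencies into a directory:

```shell
# Sketch: bundle pyTenable and all of its dependencies for offline use.
# Run this on the internet-connected side of the airgap.
mkdir -p wheelhouse
pip3 download pyTenable -d wheelhouse
tar czf wheelhouse.tar.gz wheelhouse
echo "bundle ready: wheelhouse.tar.gz"
```

Note that the packages are downloaded for the platform and python version of the machine you run this on, so best use a CentOS-like system matching the Core appliance.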
Now transfer the wheelhouse.tar.gz file to the Tenable Core appliance (or any other Tenable.sc CentOS installation with pip3) and install the packages offline with:
tar xzf wheelhouse.tar.gz
pip3 install wheelhouse/*
Which in turn should look like this:
et voilà – now you can use pyTenable on the Core appliance:
There is probably no point in doing all of this for just a simple plugin update, as the php CLI way explained above only takes one line and everything needed is already there on the Core appliance.
However, if you want to do more complex automation on the airgapped Tenable.sc host – like automatically importing .nessus scan result files from Nessus scanners that are not directly connected – then a python script can make sense at some point.
With these simple steps you can ensure that an offline Tenable.sc system receives scheduled plugin updates automagically!
I hope this helped at least one person out there! Have fun!
Which will stop the LCE, delete unnecessary files, and then ask you, silo by silo, to delete the oldest silos until the disk usage goes under 90% again.
In conjunction with the archive repo script this should help “rectify” all disk space issues in the Elastic-based LCE 5.x versions!
Pay especially good attention if your archive repo is on the same partition as the active database (which makes no sense, but is the default if you have not designed a special archive partition / don't need archiving):
In my case, in this constellation, the LCE is doomed to fill up its disk: once the HDD goes over 90% usage, archiving is prevented, which in turn prevents the automated disk trimming.
So ideally make sure you have a dedicated archive partition, and if not, set the limits so that the active database does not fill the HDD beyond 90%.
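If you want to be warned before you hit that point, a tiny cron-able check is enough. This is a hedged sketch, not a Tenable tool; the 90% threshold mirrors the LCE behavior described above, and the `/` path is an assumption that should point at the partition holding the active database:

```python
# Sketch: warn before the LCE database partition crosses the 90% mark
# at which archiving (and with it self-trimming) breaks.
import shutil

def usage_percent(path):
    """Used space of the filesystem holding *path*, in percent."""
    total, used, _free = shutil.disk_usage(path)
    return used / total * 100

if __name__ == "__main__":
    pct = usage_percent("/")  # assumption: database lives on /
    if pct > 90:
        print("WARNING: %.1f%% used - LCE self-trimming is in danger" % pct)
    else:
        print("OK: %.1f%% used" % pct)
```

Hook the warning up to your monitoring or mail and you get a heads-up before the trimming deadlock occurs.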
If you run into an LCE 5.x with a filled-up disk, use the script above (and, if broken, the archive repo script) to get it under 90% again so it keeps self-trimming!
If I missed something or you have problems, feel free to use the comments below!
For some reason Tenable has no community/KB articles about troubleshooting the Elastic stack used in the 5.x branch of LCE, so I want to share what I learned today and make it google-searchable as a solution:
Due to a disk fillup I was presented with an LCE in version 5.1.1 that was logging:
What it's about: The life of Edward Snowden, of course focusing mainly on his leaking of NSA and other state secrets.
What stood out for me: You may think about Edward Snowden what you want, but I found his story very interesting, including the part before he became infamous. You also get a narrated version of many of the leaked secrets and thus learn some aspects that you might have missed in the news reports about his revelations. The part about NSA analysts being able to watch people type search queries into Google letter by letter, and spying on their (ex-)partners, also stuck with me.
What it's about: Ross Ulbricht single-handedly created the Silk Road and managed to evade law enforcement for years! The book describes how he got to this point and how it came to his arrest.
What stood out for me: Many details, like how Ross Ulbricht wasn't really a hardcore techie and how he grew magic mushrooms to list the first drugs on the Silk Road. And of course my favorite part of the book was his arrest – which I'm not going to spoil here!
What it's about: The story of Stuxnet, one of the first publicly known military cyber attacks, instigated by the USA and Israel against Iran's nuclear program.
What stood out for me: The holistic description of every aspect of the story, starting with satellite photos of the building site of the Natanz uranium enrichment facility. Also the part about the centrifuges that got ripped apart by their own movement, and how Iran tried to hide this from the IAEA by encasing the broken-down centrifuges.
Ten Arguments for Deleting Your Social Media Accounts Right Now – Jaron Lanier
What it's about: The infosec link is not that strong for this book; however, it still refers to the Cambridge Analytica scandal and shines a light on how carelessly social media companies manipulate us.
What stood out for me: The in-depth description of how exactly social media companies manipulate us, and how they have no qualms about destroying society to earn a quick buck! And yes, I really deleted my Facebook account after finishing this one!
As a Tenable partner we have a lab license for the Tenable product suite, which we often use to test new products, features, and updates, and to recreate issues in the lab for further analysis. For this reason I was once again setting up a Nessus Network Monitor in my home network, with mirrored traffic from a switch uplink.
If you want to perform cheap and easy port mirroring at home, you don't have to rob a Cisco dealer! Netgear offers cheap and functional switches that have worked pretty well in my home network so far.
With the Netgear GS108E you can grab a 40€ 8-port gigabit switch – which is manageable and lets you simply specify the port mirror in a web interface – on Amazon.
And with the Netgear GS110EMX you even get 2x 10G ports for 217€ – again with a simple web interface for port mirror setup – on Amazon.
Please note that I don't want/get any incentives if you follow those Amazon links! They are just for your reference; you can buy elsewhere and choose other manageable switches as well!
Also, of course, those are consumer-grade switches not intended for enterprise usage! I however find the value exceptional for home lab port mirror setups!
Just for reference – I set up the mirror on the switch's uplink port, which goes to another storey of my house where the internet router is located. As there is other stuff connected on that level as well, the port mirror was not perfect, but I was not looking for a perfect setup, just some traffic to test on and play around with.
The perfect placement for a Nessus Network Monitor would be directly at the internet breakout (on the internal side of the router or firewall, to reduce noise), with additional sensors/Nessus Network Monitors in front of sensitive systems or VLAN choke points – for example production VLAN uplinks.
The Evil Outdated Chrome User-Agent
When I had the Nessus Network Monitor set up and traffic mirrored, I was greeted with a couple of interesting vulnerabilities:
I know that system! It's my *cough* Windows Home Server 2011, installed on a neat HP MicroServer N36L (wildly outdated, but new versions are available). As Windows Home Server 2011 is long out of support, this is a horrible system to have running! And I thought I now probably had the kick in the ass to replace it quickly!
But let me back up – at first glance I recognized outdated browser and Chrome vulnerabilities captured by Nessus Network Monitor. A couple of years ago I had an old Android tablet in my gym for playing music, all separated away in an IoT WiFi network that is limited by ACLs to only access the internet.
This tablet was running old, vulnerable versions of Chrome as well, but as I only used it for Spotify and not for browsing the web at all, I let it slide…
The thing is: I replaced that Android tablet with an (old and yet again vulnerable) iPad I had left over. So I looked into it and, as you can see above, identified the Chrome traffic as coming from my Windows Home Server 2011 *cough-again*.
Before hunting for the old Chrome installation (and by the way – what is it doing accessing the internet without me using it?!), I took a peek at a specific plugin output to identify the exact Chrome version:
A specific Chrome version number in the User-Agent is a pretty good indicator that it's Chrome. But version 47 is really old…
One of the Chrome vulnerability plugins (9083) that you can see on the right was reporting CVE IDs from 2016, so it was probably even older!
Googling it, you will find references pointing to a release in December 2015.
This got me a bit worried: I don't run old browser versions on computers that are quite capable of updating to and running the newest browser versions.
Also, I have all browsers on auto-update – but then again, a forgotten browser that was never used could be lurking at an old version on my home server…
The Part where I got worried
Hmmm… never being used equals an old version. A User-Agent beaconing out to the internet equals usage…
I got really worried when I connected to the system and:
THERE WAS NO CHROME! 🙀 :shockedcatfaceemoji:
This got me thinking: if I were to let malware beacon out, I would use a commonly seen User-Agent to mask my C&C traffic to the internet! Overdoing this by choosing a far too old or non-existent User-Agent is something that happens! Also, as this system had been running for years, this could be an indicator of a really old compromise.
Now I was at a loss! I feared a malware compromise, but I did not know which process was calling out to the internet…
Of course there are plenty of options in Windows to investigate this, but out of the box, if you have not set anything up beforehand, I found it is pretty hard!
This is when I had to learn again that incident response works far better if you plan ahead and deploy tools and logging prior to an incident!
Let the Hunt begin
But as I had a Nessus Network Monitor deployed, I at least had some logging capabilities now!
Sadly, out of the box you only get the triggered vulnerability plugins in the GUI, but it is possible to log a detailed realtime feed of all triggering plugins, with some more information than is displayed in the web interface.
You can set up this realtime-logs.txt file in the web interface settings:
This will create a plain logfile on the disk of the NNM system. Please note that I created a lot of the screenshots while writing this blog post, so timestamps will not be in sync with the “incident timeline” but show the points they were created for.
I grepped this file for “Chrome/47” to get an idea of how, when, and where the strange old Chrome User-Agent was being used:
This was weird – internal and external traffic!
Also, you can see specific beaconing patterns – short requests with hours of delay in between.
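Such a pattern can also be checked programmatically. Here is a sketch that computes the time gaps between requests carrying the suspicious User-Agent; the log line format below is fabricated for the demo, so adapt the parsing to your actual realtime-logs.txt layout:

```python
# Sketch: evenly spaced gaps between requests with the same User-Agent
# hint at timer-driven beaconing rather than human browsing.
from datetime import datetime

def beacon_gaps(lines, needle="Chrome/47"):
    """Gaps in seconds between successive lines containing *needle*.
    Assumes each line starts with an ISO-8601 timestamp."""
    times = [datetime.fromisoformat(line.split()[0])
             for line in lines if needle in line]
    return [(b - a).total_seconds() for a, b in zip(times, times[1:])]

# fabricated demo log lines:
log = [
    "2019-08-15T08:00:01 GET /?x=abc User-Agent: Chrome/47.0.2526.73",
    "2019-08-15T09:30:00 GET /other Firefox/68.0",
    "2019-08-15T12:00:02 GET /?x=def User-Agent: Chrome/47.0.2526.73",
    "2019-08-15T16:00:03 GET /?x=ghi User-Agent: Chrome/47.0.2526.73",
]
print(beacon_gaps(log))  # [14401.0, 14401.0]
```

Nearly identical gaps, like the two four-hour intervals in the demo, are the kind of regularity you would expect from a scheduled task or C&C timer.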
I was again freaked out by the Google hits for the external IP address and the random GET URI parameter!
If you google the HTTP GET URI, you will also find some inconclusive malware analyses matching at least parts of the string…
Needing a Break to think
This is when I, against all forensic best practices, took the server offline and checked it with an antivirus scanner via c't desinfect 2019.
I am in no way saying that AV is the solution for this, nor did I have any hope that AV would give me a positive signal that the system was not compromised. However, as I needed a break, I decided that AV could at least give me a signal if the system had been compromised three ways sideways and filled with bad stuff.
So AV would not give me peace of mind, but if it found stuff, I would know I have a problem.
And I did not necessarily need the break because I was exhausted, but because I had registered for a Black Hills Infosec webcast about Sysmon and AppLocker that was about to start and could be watched during an hour on the crosstrainer in my gym!
You know, there is even a name for the phenomenon of seeing stuff at exactly the right moment. Like when you buy a new car and all of a sudden start seeing that model everywhere on the road.
The AV came back clean, so more logging was required – as well as a good night's sleep!
The webcast about Sysmon and AppLocker was very informative, and Sysmon was exactly the answer I needed! I had been thinking about another tool from the Sysinternals suite – TCPView – however, I know how big those logs become, and Sysmon seems pretty awesome and something I have to deploy on all my Windows machines soon anyhow!
As stated in the Black Hills Infosec webcast, you can start directly with a proper config XML from @SwiftOnSecurity, which you can get from his GitHub.
So I set my trap – if I had been compromised for years, I could probably sleep another night on it – installed Sysmon, and tailed my NNM realtime log.
Not everything goes according to plan!
This morning (August 16th) I woke up, and of course there was again an old Chrome beaconing out (as already shown above):
So I checked the Sysmon log and was greeted with no matching log entry:
Little side story: when I set up Sysmon, I verified that the date/time stamps on the NNM host and the Windows server were close enough to each other to properly correlate events based on time.
Of course I then found out that the ESXi in my lab was not able to reach NTP, and its time was off by 15 minutes…
Let that be a lesson to never skip proper basic setup in productive environments – it will always bite you in the ass in case of an incident!
Luckily this was only my lab, which I don't even run continuously, so I have somewhat of a lame excuse, and I promise that I always check the basics in the productive setups I perform!
All of this, however, doesn't change the fact that I didn't have a matching event in Sysmon… I also searched for the destination IP but got nothing…
Finally the Conclusion
What got me to the conclusion was that the beacon was internal, and the destination IP of the beacon (and of a couple in between) was my LG smart TV, which is of course joined to my network to stream Netflix!
That's when I remembered that Windows Home Server does have some UPnP streaming stuff going on, and also that I have a Plex server running on the Windows system and a Plex app on the TV.
After getting no Google results for Windows Home Server 2011 and the User-Agent in question, I stumbled over Plex in some of the results.
Lo and behold – the villain:
Not everything is resolved, but I don't think I got hacked anymore…
So I should still get to the bottom of why the fuck Plex is beaconing to the internet and to my smart TV with four-year-old Chrome User-Agent strings – probably because they are embedding horribly old code for some reason – but I am now certain that I was not hacked…
At least… 99%… somewhat… let's not talk about paranoia…
Since then, PayPal has had a lot of hits and misses with 2FA, as you can find in countless blog posts out there.
I cannot tell you exactly when, but at some point in the last two years PayPal managed to implement support for proper 2FA OTP apps like Google Authenticator, Authy, LastPass Authenticator, and YubiKey OTP, to name only a few!
You can set this up by logging into the PayPal website and navigating to the security settings:
It is now finally also possible to remove SMS 2FA entirely, which is a good idea when securing your money!
If your mobile phone number is still listed there, add a “Third-party code generator app”, switch it to your primary device, and remove the mobile number!
I'm always of the mindset that SMS 2FA is better than no 2FA at all, but it is not state of the art and has proven easily breakable by SIM swapping!
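For the curious: what a “third-party code generator app” actually computes is just RFC 6238 TOTP, an HMAC over the current 30-second time step, keyed with the shared secret you scan as a QR code. Here is a minimal standard-library sketch for illustration only; use a vetted app or library in real life:

```python
# Sketch of RFC 6238 TOTP code generation (the SHA-1 variant used by
# most authenticator apps). Illustration only - do not roll your own.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Return the TOTP code for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at time 59
rfc_secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(rfc_secret, for_time=59, digits=8))  # 94287082
```

Because the code depends only on a shared secret and the clock, it works completely offline – which is exactly why it beats SMS: there is no phone number to SIM-swap.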
No U2F – Will PayPal ever Support it?
So before we praise PayPal for finally implementing TOTP properly on their website (by the way, they don't offer recovery codes when setting up 2FA…), let's note that it is 2019: U2F and cheap tokens like YubiKeys – and even cheaper U2F-only tokens – are now available and will prevent phishing of your second factor!
Read up on Wikipedia about how U2F prevents a MITM website from stealing your second factor!
So definitely switch your PayPal account over to an OTP app like Authy and deactivate SMS 2FA, but beware that you still have to be careful not to enter your login credentials + 2FA code into a phishing site!
If you want to deploy Nessus Agents in an on-premise Nessus Manager setup, you have to make sure that Nessus Manager has a certificate trusted by the clients' OS, and that Nessus Manager trusts the clients' computer certificates.
With the default self-signed certificate, linking of agents will not work. You might have found this out during agent linking, having seen some kind of SSL error like this:
[07/Apr/2017:11:57:47 +0100] [error] [msscan] Connection to manager for 'jobs?distro=es6-x86-64&platform=LINUX&sleep_time=10&ui_version=6.9.3' failed with code 0 [Connection to shared-nessusmanager:7021 failed with an ssl error] -- Last connection was Mon, 20 Feb 2017 18:51:18 GMT
Nearly every environment I encounter has similar parameters:
A Windows Domain
Mainly Windows Clients and Servers
A Windows Certificate Authority (CA)
Nessus Manager running on Linux, or the Tenable Virtual Appliance, which is based on Linux
If you want to deploy agents in a similar environment and are not certificate-savvy, the following guide to deploying a trusted certificate on your Nessus Manager might help you!
Create a PrivateKey and CSR for your Nessus Manager
First we need a private key for the web server. If you do not know why, read up on PKI!
The easiest way to do this, especially if you want to choose a custom hostname for the certificate, is with openssl on a Linux or macOS host. It is possible on a Windows host as well (google it), and there are also websites out there that provide this as a service, but you should always think carefully about whom you trust with your private keys!
openssl genrsa -out hostname.key 2048
Note: the name of the output file is arbitrary! But it makes sense to name it after the hostname, to not mix things up!
Important: Do not ever send out this private key! Just like a private key for SSH, a private key for a web server certificate should never be made public!
Important 2: RSA has come under criticism lately – there are newer and better cryptosystems. That discussion is not part of this blog post, but feel free to research alternatives to RSA private keys! You also have to decide whether 2048 bits are strong enough for your environment. Best read up on your company's security guidelines regarding certificates!
Now that you have a private key, you can create the CSR (Certificate Signing Request) that you can then send on to the CA administrator of the Windows domain:
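The command for this is short. The subject fields below are placeholder values for illustration; leave out `-subj` to be asked for them interactively instead:

```shell
# Create the CSR from the private key generated above. (If you are
# following along without a key yet, the first line creates one.)
[ -f hostname.key ] || openssl genrsa -out hostname.key 2048
openssl req -new -key hostname.key -out hostname.csr \
  -subj "/C=DE/O=Example Corp/CN=nessusmanager.example.com"
echo "CSR written to hostname.csr"
```

The CN must match the hostname the agents will later use to reach the Nessus Manager.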
You will be asked a couple of questions about what data should be present in the certificate, with the most important one being the hostname.
Note: This will create a CSR without a SubjectAlternativeName, which is nowadays required by Chrome. So if you want a certificate that is trusted by Google Chrome (the Nessus Agents won't care!), you have to awkwardly pass the SubjectAlternativeName via a config file, as described for example here.
Again: never send out the private key alongside the CSR! Only the CSR itself!
If you ever want to check the contents of a CSR, for example to verify that the SubjectAlternativeName was included properly, you can use the following command:
openssl req -text -noout -verify -in hostname.csr
Request a Signed Certificate Chain from the Windows Domain CA Administrator
A lot of companies are running Windows CAs. Send the generated .csr (not the key!) to the CA admin and request specifically that they create:
A p7b certificate chain in base64-encoded format!
The default under Windows is often binary (DER) encoding, which will not be compatible with the next steps. Granted, if you are an OpenSSL/certificate expert you can convert nearly any format into any other format – but if you were an expert, you probably would not be reading this article, so make sure you get a base64-encoded p7b chain!
Convert the p7b chain into individual separate certificates
The p7b file will contain the entire chain in a single file, which will look/start like this:
Split this text file into 3 separate text files, each time after -----END CERTIFICATE-----. The subject and issuer lines above each certificate and the BEGIN/END markers can stay in the files, to make it easier for a human to identify the content of each file.
Save the three textfiles as:
hostname.cer (the server certificate)
subca.cer (if a subca was present)
rootca.cer (always has to be present – it has signed the cert!)
Note that you actually have a fourth file: hostname.key which is the matching private key for the server certificate.
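If you have openssl at hand, the conversion and split can also be scripted instead of done by hand in a text editor. The following sketch fabricates a one-certificate demo chain so it is runnable standalone; point the commands at your real p7b instead:

```shell
# Demo input: fabricate a p7b so the commands below can run standalone.
# In practice, chain.p7b is the base64 file your CA admin sent you.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key \
  -out demo.pem -days 1 -subj "/CN=demo"
openssl crl2pkcs7 -nocrl -certfile demo.pem -out chain.p7b

# Convert the p7b chain into one PEM file holding all certificates:
openssl pkcs7 -print_certs -in chain.p7b -out allcerts.pem
# Split into cert-1.pem, cert-2.pem, ... after each END CERTIFICATE line:
awk 'BEGIN{n=1} {print > ("cert-" n ".pem")} /END CERTIFICATE/{n++}' allcerts.pem
ls cert-*.pem
```

With a real three-certificate chain you would then just rename cert-1/2/3.pem to hostname.cer, subca.cer, and rootca.cer as described above.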
Install the Certificate
Now that you have all the required files you can go ahead and install the Server Certificate:
Navigate to Applications -> Nessus -> and Scroll Down to the Certificate Settings:
Now provide the four files (intermediate certificates = SubCA certificates).
Note: If there is more than one SubCA/intermediate CA, don't split the SubCA chain; leave all SubCA certificates in one file to be uploaded here!
After providing the four files and clicking Install Server Certificates, you will see a green success message at the top of the screen, and this dialog will show the newly installed certificate's information.
Important: Note down the “Not Valid After” date somewhere – best set a reminder in your calendar – as your entire agent setup will stop working if you let the certificate expire! Ideally keep the four files safe, and four weeks before the expiration date create a new set of files / a new server certificate to replace the old one. If you make a mistake, you can always quickly roll back to the old files and troubleshoot what you did wrong.
You still have to do one last step! Nessus Manager will also verify the clients' computer certificates, which are likewise signed by the Windows CA. However, you have to specify the RootCA as trusted for this separately, in the lower section of the dialog already shown above:
Here you only have to provide the RootCA certificate, which is the same one from the four files you created above!
Note: This is basically just the public certificate of the RootCA, which can be found everywhere on the domain anyway! It allows Nessus Manager to validate the trust chain for the clients' computer certificates.
Important: Note that the Core appliance keeps two different sets of certificates (one for the Core appliance's web interface on port 8000 and one for Nessus Manager on port 8834). Make sure to upload the certificate at least to the Nessus Manager configuration, but feel free to also deploy it to the Core appliance web interface.
If this helped at least one person struggling out there, I'm happy! Feel free to ask questions in the comments below if you encounter an issue with this guide, and I will be happy to improve it where necessary!
Linux (you probably have everything you need installed already!)
macOS (you've got a Terminal, but all programs are old…)
So if you are using macOS, you do have a Terminal running Bash (for now) by default, but all programs are horribly old and won't get you far!
There are gazillions of different paths you can take to get decent python and pip versions, and updated versions of other tools, running!
I will describe just one of them: the BREW way. Taken from the documentation:
Homebrew is the easiest and most flexible way to install the UNIX tools Apple didn’t include with macOS.
The good thing: Homebrew will conveniently install up-to-date UNIX tools beside the default Apple built-in tools without touching or removing them! So you can keep the default Bash and python from macOS, but also install recent versions beside them in your profile and use them whenever you feel like it!
If you are familiar with brew, chances are you already have it installed on your macOS! If not, do not just blindly pull install scripts from the internet and hope they won't own you!
Read up a bit! Ask professionals with Macs whether they use brew and whether they trust it! Do some research about what you are going to do!
If you can and have the time you might even want to dig through the brew Installer Script before installing it!
If you are paranoid: download the brew installer script manually from the GitHub repository before feeding it to ruby! Make sure it is the same as in GitHub before running it, to verify it has not been tampered with during download!
After you have installed Homebrew, you will use two commands often:
brew update && brew upgrade
…to update your installed brew packages…
brew install PACKAGE
…to install a new UNIX tool.
Note: Homebrew will not replace the default macOS programs in your path! So if you, for example, installed Bash v4.x via Homebrew, macOS will not launch Bash v4.x when you open a terminal!
But you can always spawn a Bash v4.x instance by calling it from its installation directory (typically /usr/local/bin/bash):
Install Bash
First off, I would recommend installing a recent version of Bash, to be able to run all the bash scripts you encounter (there are scripts that will not work with macOS's old version of bash!).
brew install bash
Note: Remember, as stated above, that you have to drop into this new version of Bash by calling it directly:
Install Python, pip & pyTenable API Wrapper
When you have successfully installed Homebrew and played around with updating it and all installed packages, you can continue by installing python and the pyTenable API wrapper:
brew install python
will install python (versions 2 and 3, including pip for both).
pip3 install pyTenable
will install the pyTenable API Wrapper.
Note: As macOS ships no pip3 command of its own, the Homebrew-installed pip3 will automatically be the one in your path – no dedicated call from /usr/local/bin is required.
You are now able to use recent versions of Bash 4, python2, python3, pip2, pip3, and pyTenable!
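A quick sanity check that everything landed where it should (the pyTenable line will only print something once the package is actually installed):

```shell
# Verify the toolchain versions that are now available:
bash --version | head -n 1
python3 --version
command -v pip3 >/dev/null && pip3 --version
pip3 show pyTenable 2>/dev/null | head -n 2 || true
```

If the Homebrew Bash does not show up as version 4.x or newer here, remember that you have to call it via its full path as described above.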