David Dixon

Interests

DevOps, Network Engineering, Process Automation, System Design & Integration, Information Networks, Scripting, Cloud Computing, Microservices

Skills

  • AWS, Microsoft 365/Azure
  • Linux, Windows, Unix
  • Apache, IIS, NGINX, Tomcat
  • Wireshark, TCPDUMP, ClearSight, The Dude
  • TCP/IP, iPerf, jPerf, Cisco, MikroTik, Ubiquiti
  • MySQL, PostgreSQL, MSSQL, MongoDB
  • Symantec Backup Exec, URBackup
  • Active Directory, OpenLDAP, SimpleSAML
  • Hyper-V, ESXi, VirtualBox, QEMU, Docker
  • Prometheus, Grafana
  • xCAT, SLURM, HPC
  • SharePoint, WordPress, Drupal, DotNetNuke
  • Python, ASP.NET, Java, HTML, CSS, JavaScript
  • BASH, PowerShell, Ansible, Jinja2
  • YAML, JSON
  • Bitbucket, JIRA, GitLab, Jenkins, Artifactory, Confluence, GitHub

Education

Associate of Applied Science in Computer Information Systems

Ivy Tech
Bloomington, USA
May 2009

Certificate in Network Administration

Ivy Tech
Bloomington, USA
May 2009

Experience

Warrant Technologies

System Engineer / CIO
October 2017 - July 2024


I worked both independently and within teams across a variety of projects, serving in administrative, technical, and business development functions ranging from technical design and process implementation to personnel and project management, acquisition strategy, and proposal management. Some of my projects and roles included:
    Universal Command and Control/UC2 (JXWN) – As a technical manager I led an embedded team of engineers who helped the customer design and deploy technical solutions for a complex expeditionary system. Personal duties and accomplishments include:

  • Build times and complexity for the ECORE C2 software platform proved to be long and difficult, which provided an opportunity to create an automation pipeline to save time, reduce build issues, and capture build metrics. The solution involved the design and deployment of a CI/CD pipeline that builds and deploys the EBRISS/ECORE command and control platform, required RPMs, and simulators. The toolsets involved were Jenkins, BASH, Bitbucket, Java, and Artifactory. The solution expedited build times and shortened regression, stress, and acceptance testing cycles on Linux (RHEL) based hardware and virtual machines, and saved time and cycles by moving from manual to automated deployment using Test Resource Management Center (TRMC) cloud services and on-premises mission hardware.
  • As the popularity of and requirement for the UC2 communication model rose, so did the need to support fielded deployments and potential system adopters. This led to the inception of the Reference Implementation Lab (RIL). I worked with government stakeholders to understand the requirement for each system in the RIL. Requirements were translated into tickets, and in some cases larger epics, that captured tasking, time estimates, definition of done, customer, impact, and progress. Technical tasking for the buildout included, but was not limited to, infrastructure design and deployment for physical computing components, the network layer, applications, and network routing and switching equipment. I employed VMware ESXi-based VMs and physical hardware to host lab infrastructure in conjunction with CI/CD pipelines.
  • To improve efficiency, support new RHEL releases, and provide a better ECORE OS, I worked on a redesign of the ISO build process. I initially worked with the government to capture requirements for the build while removing archaic processes and components dating from decades prior. The new ISO build process allowed variable definitions for items such as partitioning, package injection, unattended builds (YAML structured), and other government-defined elements. The technical architecture used Ansible, Jinja2, and kickstart to automate the creation of Red Hat 8 ISOs for deployment (an illustrative templating sketch appears after this list). After development was underway, I transitioned this project to another team member but continued to participate in design reviews, product documentation, and technical assistance.
  • In many cases, the impact of the data a sensor, component, or overall system produces is poorly understood, which leads to confusion and difficulty in later systems engineering cycles. As the RIL and UC2 continued their rollout, and given the often austere network conditions for expeditionary systems, necessity dictated that object-to-wire-cost models be created. Test plans defined each test case and transport mechanism, along with compression versus the open XML standard. Tests were modeled using real-world scenarios that showed how a “UAV swarm” might impact the network and overall system performance. Analysis points and formulas were developed and integrated into the test plans. We utilized custom sensor simulators, network analyzers, and COTS equipment for our tests. Final evaluation and test findings were delivered to the government showing the object-to-wire cost and the overall compression improvement offered by UC2.
  • As tasking and system CONOPS grew, the need for other types of tests expanded as well. This involved test design for a variety of components including long-range radio, free-space optics, EW sensors, Quality of Service (QoS), and other system elements and concepts. Leveraging the earlier test elements while expanding our palette gave the government more insight into network, system, and UC2 operations. Some of the tooling employed for test automation and data gathering included BASH, iPerf, TCPDUMP, and Wireshark (a throughput-test sketch also appears after this list).
  • CONUS and OCONUS implementation requests for the UC2 System of Components (SoC) began to ramp up. To ensure smooth rollouts in the field, our team developed a suite of tests for regression, smoke, integration, and operational scenarios. We were able to uncover a number of regression issues with vendor UC2 plugins while documenting troubleshooting and CONOPS processes for specific sensors. Additionally, this allowed the team to support both government and contractor Field Support Representatives (FSRs) when deploying the sensors. Technical network design for multiple UC2 test and deployment scenarios involved NAT, QoS, VLANs, routing, and EtherChannel (port aggregation) in support of both hardware and software routers.
  • Expeditionary hardware and software can be confusing to the untrained. I have experience with an array of expeditionary sensors and technologies including electro-optic, CUAS, SATCOM, TAK, DDU, and radar. I mentored my team members on the functionality and integration of each component into the larger system. Mentoring involved the setup of physical and virtual test hardware, analysis of code/network/system operation, and the generation and execution of tests in the RIL that showcased the definition of done and overall integration performance.
  • Many times in the IT and engineering field, team members get “stovepiped” in their work; this is a common side effect of simple ticket-based tasking, as the core focus is getting the weighted-priority ticket completed. To assist with completion of priority tasks while giving each team member insight into product goals, I conducted a twice-weekly meeting where team members could ask for assistance on tickets, provide feedback on their work, and discuss upcoming milestones. This led to better team morale, improved ticket-completion efficiency, and better overall integration into the government customer space.
  • Government contracting requires contractual reporting and cost analysis. Each month I compiled Monthly Status Reports (MSRs) that provided technical accomplishments, upcoming milestones, contract issues, and anticipated travel. Additionally, I performed cost tracking and interfaced with the contract prime to ensure funding flow and provide other insights.
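
Below is a minimal, illustrative sketch of the YAML-driven Jinja2/kickstart templating pattern referenced in the ISO build bullet above. It is not the actual build code: the variable names, template contents, and file paths are assumptions chosen for the example.

    # render_ks.py - illustrative only; variable names and template text are hypothetical
    import yaml                      # PyYAML
    from jinja2 import Template

    KICKSTART_TEMPLATE = Template("""\
    lang en_US.UTF-8
    timezone {{ timezone }} --utc
    clearpart --all --initlabel
    {% for part in partitions -%}
    part {{ part.mount }} --fstype={{ part.fstype }} --size={{ part.size_mb }}
    {% endfor -%}
    %packages
    {% for pkg in extra_packages -%}
    {{ pkg }}
    {% endfor -%}
    %end
    """)

    def render_kickstart(vars_path: str, out_path: str) -> None:
        """Load YAML-defined build variables and write a rendered kickstart file."""
        with open(vars_path) as f:
            build_vars = yaml.safe_load(f)
        with open(out_path, "w") as f:
            f.write(KICKSTART_TEMPLATE.render(**build_vars))

    if __name__ == "__main__":
        render_kickstart("build_vars.yml", "ks.cfg")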
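
The following is a small, hypothetical example of the kind of throughput-test automation mentioned above, wrapping iperf3's JSON output in Python. The peer address and test length are placeholders, not values from the actual test plans.

    # iperf_probe.py - illustrative throughput probe; peer addresses are placeholders
    import json
    import subprocess

    def run_iperf(server: str, seconds: int = 10, udp: bool = False) -> float:
        """Run an iperf3 client test and return throughput in Mbit/s."""
        cmd = ["iperf3", "-c", server, "-t", str(seconds), "--json"]
        if udp:
            cmd.append("-u")
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        report = json.loads(result.stdout)
        end = report["end"]
        bps = (end["sum"]["bits_per_second"] if udp
               else end["sum_received"]["bits_per_second"])
        return bps / 1e6

    if __name__ == "__main__":
        for peer in ["192.0.2.10"]:          # placeholder peer under test
            print(f"{peer}: {run_iperf(peer):.1f} Mbit/s TCP")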


  • Sense and Interdict/SENSEI (JXWR) – Performed early systems engineering with the SENSEI CUAS system. The SENSEI platform’s goal is to detect UAVs and provide countermeasures. The system employed electro-optic/infrared (EOIR), EW, and radar sensors coupled with the ECORE OS. Integration was performed at a government location with limited resources during the COVID-19 pandemic. Accomplishments include:

  • The government customer had a need for early integration of all sensor components, the network, and the OS. Using my background with electro-optic sensors, ECORE/EBRISS, and Linux, I worked with the customer to perform feasibility analysis and integration for the SENSEI shore- and ship-side system design. I worked with other remote vendors and government entities to troubleshoot issues surrounding each sensor or component, and produced documentation for CONOPS, security, and testing of the system. These efforts led the way to the blocking-vessel (ship-side) SENSEI variant being tested.
  • As other contractual duties for NSWC Crane Base Wide IT demanded more of my time, I needed to transition out of this role. I provided guidance to another Warrant Technologies engineer on the configuration of hardware, software, and other system components, and provided pro bono assistance to the team as allowed.


  • NSWC Crane Base Wide IT (CODE 1041) – The NSWC Crane IT Division is charged with the upkeep of Crane-hosted systems and networks, mostly RDTE and SRDTE. A component outside these enclaves, the Navy Research Enterprise Network (NREN), falls under the same physical roof. My role here was two-fold: managing the overall technical and contractual health of the contract while serving as technical lead and mentoring team members. Some of these tasks included:

  • NSWC Crane has a number of engineers in each division who utilize Linux as their OS. I worked directly as the Code 1041 contract Linux Admin Lead. This was a multi-faceted role: I handled technical design and implementation of systems at a tier-3 level, assisted junior staff with their Microsoft Service Manager tickets, coordinated with other stovepiped functional groups on system rollouts and issues, and performed security awareness and maintenance for systems. I mentored team members in this position, which involved review of system documentation and architectural data-flow diagrams, as well as on-the-job training with junior admins so they gained hands-on experience and could serve as role failovers.
  • A need arose from NSWC Crane’s Small Arms group to stand up a high-performance computing cluster. At the time, I was off direct contract assisting with the SENSEI effort, and the prime contractor, GDIT, did not have the resources to complete the task. Pro bono, I performed a feasibility analysis based on requirements and hardware limitations; initial analysis and testing evaluated xCAT and Warewulf as cluster-manager options. I developed the initial HPC proof of concept using xCAT + SLURM on CentOS 7 during off-work hours. Knowing I would not be able to maintain the project given other obligations, I used version control and documentation on GitLab, and automated the deployment of the xCAT cluster manager and SLURM job scheduler using BASH. I provided team training and support during the transition so I could guide them through deployment on production systems.
  • A major time commitment for ITD is the implementation of Security Technical Implementation Guides (STIGs) and the generation of XML-based checklists for use with STIG Viewer. Additionally, the distro offering expanded during this time to include the Debian-based Ubuntu 16.04 platform. Further compounding the issue, /etc/init.d process management was being replaced by systemd (a stumbling block for the team). To automate this process, I wrote an automated module for the Ubuntu 16.04 STIG. The module checks the system against each STIG item and allows either user input or a fully automated run when patching; in its final function, an XML-based checklist is generated that is viewable in the DISA STIG Viewer. I mentored other interested team members, and they wrote the later 18.04 module. I pushed the code to NAVSEA SPORK for collaboration and consumption.
  • Another large time commitment for ITD is the secure provisioning of systems. A manual process was in place in the Linux group, which involved a gold disk and later patching by individuals. We transitioned to automated system builds using Red Hat Satellite Server. I created process documentation for the build lifecycle, including troubleshooting steps for deployment issues, and compiled system-architecture documentation for the Satellite server corresponding to each stage of the build: complete system coverage from the kexec state and provisioning templates through final deployment. Additionally, I produced step-by-step documentation during a system upgrade so the process could be used on SRDTE deployments with limited outside resources. Technical toolsets included Satellite Server, GPG, RHEL, Ubuntu, Ruby-based provisioning templates, kickstart, Foreman (kexec state), and VMware.
  • As the rollout of the Ubuntu distro became more mainstream at NSWC Crane, so did the need to expand trusted package offerings to each division. To achieve this I created Crane-owned mirrors of Canonical/Ubuntu and third-party repositories to aid in provisioning Ubuntu-based machines. Additional engineering needs drove consumption of common toolsets including, but not limited to, PyPI, Docker, and FIPS. Using BASH I automated the mirroring of each trusted upstream repository to increase efficiency and security for systems residing at Crane, and pushed the automation code to NAVSEA SPORK for collaboration.
  • During the COVID-19 pandemic the need for remote work was pressing, but Linux-based systems were severely limited in this regard because there was no existing VPN solution. Out of necessity I developed a Python 3 + BASH wrapper around the OpenConnect VPN client. Using the native PKCS libraries on RHEL and Ubuntu machines, the wrapper prompts the user for a CAC, ensures PIN input, and parses the DoD certificate. The parsed certificate is ultimately passed to the VPN edge device on the RDTE network and mapped to the appropriate Active Directory user. Upon validation at the AD level, a Kerberos token is handed back and a logical tunnel interface is established. The solution provided remote work capability to RDTE Linux users (a hedged sketch of this kind of wrapper follows this list).
  • I administered the DMZ and perimeter DNS systems on legacy Unix hosts using BIND. Because legacy Unix is cumbersome compared to modern Linux distros, I migrated the service to a current RHEL release and performed patching, DNS blacklist updates, and SSL certificate management.
  • Managing STIG checklists and POAM data is cumbersome for ISSOs. I worked with the ISSO, Tyler Forijter, to deploy the STIG Manager software to track the associated data, using Docker + Keycloak + STIG Manager for idempotent deployments of the software, which manages the assessment of information systems using security checklists published by DISA. The proof of concept was built on RDTE and later transitioned to SRDTE systems for a uniform approach.
  • In an effort to share information across divisions, I participated in the NSWC Crane Linux Community of Interest. This allowed me to gain insight into what technical challenges other engineers were facing. I shared information about ongoing ITD efforts and potential solutions to support mission-funded groups.
  • I managed the health of the contract and, over its term, grew the team to 15 full-time employees. This involved a great deal of care and feeding with minimal supervision and support. Many of my non-technical duties involved liaising with our prime contractor, GDIT, regarding the status of each functional area and its personnel. When needed, I provided coaching for individuals in other roles (network, Windows, NREN, etc.), ensured that employee needs were met by the company, and provided solutions to emergent employee, prime, and government needs.
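
Below is a hedged sketch of the kind of OpenConnect wrapper described in the remote-work bullet above. It is illustrative only: the PKCS#11 URI, gateway name, and PIN-handling behavior are assumptions that depend on the local smart-card middleware and the OpenConnect build, and this is not the deployed tool.

    # cac_vpn.py - illustrative CAC/OpenConnect wrapper; the URI and gateway are placeholders
    import shutil
    import subprocess
    import sys

    CERT_URI = "pkcs11:token=PIV_II;type=cert"   # hypothetical certificate URI
    VPN_GATEWAY = "vpn.example.mil"              # hypothetical VPN edge device

    def card_present() -> bool:
        """Use OpenSC's pkcs11-tool to confirm a smart card token is available."""
        if shutil.which("pkcs11-tool") is None:
            sys.exit("pkcs11-tool (OpenSC) is required but was not found.")
        result = subprocess.run(["pkcs11-tool", "--list-token-slots"],
                                capture_output=True, text=True)
        return "token label" in result.stdout.lower()

    def main() -> None:
        if not card_present():
            sys.exit("No CAC detected; insert the card and try again.")
        # openconnect selects the card-resident certificate via its PKCS#11 URI and,
        # in this sketch, is left to prompt for the PIN itself so the PIN never
        # appears on the command line.
        cmd = ["sudo", "openconnect", "--certificate", CERT_URI, VPN_GATEWAY]
        sys.exit(subprocess.run(cmd).returncode)

    if __name__ == "__main__":
        main()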


  • CBRNETechindex (DHS) – This was a smaller effort through the Department of Homeland Security (DHS). I was responsible for the effort from RFP capture through technical execution to sunset. Working with MRIGlobal, we developed a C#/.NET solution providing smart-card certificate parsing, user-mapping functionality, and login capability on a DotNetNuke (DNN) managed instance.

  • I managed and contributed to a small technical team through the development of an ASP.NET plugin that allows seamless login and registration with a CAC or PIV card. I managed contract deliverables and the standup of an AWS-based development environment, and contributed to code, test validation, and customer interfaces. I worked with the customer to resolve issues they encountered on their production systems before and after code deployments.


  • Lexington-Fayette Urban County Government/LFUCG (Private) – This was a small effort that provided aid to the homeless population around Lexington, Kentucky. Our role was data normalization and visualization.

  • LFUCG gained much of its funding through grant writing with HUD, and a large part of grant writing and any root-cause analysis involves understanding your data. As team lead I worked with the LFUCG customer and technical team to develop data-normalization routines and applications. Using an AWS development environment, we performed sandboxed design and data normalization against copies of the data, and gathered customer feedback (an illustrative sketch follows below). Once approved, a Tableau visualization was attached to the normalized dataset and consumed by the LFUCG customer.
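
An illustrative sketch of the sort of normalization routine described above, written with pandas. The column names, files, and rules are hypothetical, since the actual HUD reporting fields are not reproduced in this document.

    # normalize_intake.py - illustrative only; column names and files are placeholders
    import pandas as pd

    def normalize(raw: pd.DataFrame) -> pd.DataFrame:
        """Standardize column names, trim text fields, coerce dates, drop duplicates."""
        df = raw.copy()
        df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
        text_cols = df.select_dtypes(include="object").columns
        df[text_cols] = df[text_cols].apply(lambda s: s.str.strip())
        df["intake_date"] = pd.to_datetime(df["intake_date"], errors="coerce")
        return df.drop_duplicates()

    if __name__ == "__main__":
        cleaned = normalize(pd.read_csv("intake_export.csv"))   # placeholder extract
        cleaned.to_csv("intake_normalized.csv", index=False)    # consumed by Tableau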


  • GrowthOps (Private) – I performed a number of different roles on this contract, ranging from datacenter engineering (physical through application layer) to security penetration testing to ensure PCI and industry security baselines were followed. I also acted as contractual liaison to grow business opportunities in new market areas.

  • This small effort leveraged my past experience with Kali Linux, NIST-based security models, and my engineering background to perform security testing and gap analysis for a larger PCI audit at a Utah-based datacenter and call center. I developed and executed a week-long pentest that categorized physical and digital security; the findings were later used by the client to showcase vulnerabilities in specific system architecture (a small scan-automation sketch follows below). Technical toolsets included Kali Linux, wafw00f, nmap, Metasploit, hping3, dmitry, nikto, Wireshark, nbtscan, OpenVAS, nnmsenum, macchanger, airmon, and other Kali suite tools.
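
The snippet below is a small, hypothetical example of the kind of scripted reconnaissance pass referenced above, wrapping nmap's XML output in Python. The target range is a documentation placeholder; scans of this kind apply only to systems that are in scope for the engagement.

    # svc_scan.py - illustrative nmap wrapper; the target range is a placeholder
    import subprocess
    import xml.etree.ElementTree as ET

    def scan(target: str) -> dict:
        """Run an nmap service-version scan and return {host: {port: service}}."""
        proc = subprocess.run(["nmap", "-sV", "-oX", "-", target],   # -oX - writes XML to stdout
                              capture_output=True, text=True, check=True)
        findings = {}
        root = ET.fromstring(proc.stdout)
        for host in root.findall("host"):
            addr = host.find("address").get("addr")
            open_ports = {}
            for port in host.findall("./ports/port"):
                if port.find("state").get("state") == "open":
                    svc = port.find("service")
                    open_ports[port.get("portid")] = svc.get("name") if svc is not None else "?"
            findings[addr] = open_ports
        return findings

    if __name__ == "__main__":
        print(scan("192.0.2.0/28"))          # placeholder in-scope range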


  • RDCMS (NSTC Great Lakes) – This was Warrant Technologies’ first prime contract (1-year FFP). I performed a variety of roles, including systems analysis and design of the system architecture, plus installation, configuration, testing, and maintenance of infrastructure components. The project used Angular with an offline service worker to ensure Navy Recruit Hard Card and other training information was delivered to the Navy CeTARS database.

  • NSTC Great Lakes is responsible for the recruit training process. Their data-tracking system at the time used a manual “Hard Card” documentation approach: data was tracked on a written card and manually uploaded to the final CeTARS database. Our team proposed a digital alternative whereby data is entered into a “Hard Card”-style mobile interface and uploaded to the CeTARS database. Throughout the contract I assisted with project management, software delivery, systems design, CeTARS database integration, testing, and application STIG / code security analysis.
  • I participated in the design of the Recruit Division Commander Management Suite (RDCMS) by tracing code functionality to requirements. I led the team in weekly product demos with the customer while tracking adherence to the Integrated Master Schedule and overall requirements matrix. I developed the System Security Plan (SSP) and other Risk Management Framework (RMF) deliverables in support of obtaining an Authorization to Operate (ATO). Overall process improvement from the RDCMS application was valued at over 3,000 man-hours per year.


  • ProCDRL (Internal) – A Node.js, MongoDB, and Angular software project to automate tracking of CLIN/SLIN line items while generating a MIL-STD-compliant report. Acting as product owner and DevOps engineer for each sprint cycle, I helped organize work tickets, run sprints, architect the framework, develop requirements, and provide guidance as needed.

  • The concept of generating a MIL-STD funding and technical report gave rise to this company-internal effort. I led a small development team through multiple sprint cycles, providing product demos to internal stakeholders. Additionally, I worked as the principal DevOps engineer for the implementation and monitoring of the code continuous integration / continuous delivery strategy, and was responsible for architecting the delivery pipeline and the AWS environments used for our core platform and authentication platform. Tools leveraged included MongoDB Atlas, AWS EC2, GitLab, BASH, Node.js/NPM, NVM, AWS RDS, AWS S3, AWS Route 53, and AWS CloudWatch.
  • I was also given the product owner role, in which I provided guidance to the team in design sessions and test events and through data models and use cases built with Enterprise Architect software. While this effort succeeded in building a report from webform input, it ultimately failed due to shifting requirements, lack of resources, and larger impacts on the data models used to generate the report.


  • Other roles – To help others with their work, I often performed a variety of non-technical duties. Many of these were administrative, business development, or HR related; effectively I served as an Operations Manager.

  • Exploring acquisition avenues outside of SeaPort, I developed the strategy and relationships for a Defense Technical Information Center (DTIC) contract acquisition, responded to the RFP with technical and cost proposals, and performed task execution and oversight after contract award. Additionally, I mentored an engineering team unfamiliar with expeditionary systems development. Overall time to acquisition on DTIC was cut to days versus months. I have also assisted with the acquisition of GSA 00CORP and OASIS contracts.
  • I have performed some HR functions including job description creation, analysis of exempt / non-exempt employees, and rate modeling for Labor Categories.
  • I helped with the award of Warrant Technologies’ CMMI (Capability Maturity Model Integration) software certification. I contributed input to the team based on process-area needs and helped perform audits, data-artifact captures, and guidance for the rest of the team. I collaborated with company leaders on policy development to ensure CMMI intent fell within company capability, and helped implement policy and toolsets across software and company projects. Many of these rollouts incorporated processes such as ticketing, code style guides, repository usage, Git procedures, and DevOps standards into a common developer user guide.

Image Matters, LLC

DevOps Engineer
2015 - October 2017
Worked on GeoPlatform, a federal cross-agency collaborative effort focused on geospatial data, community, and data transparency for the Department of the Interior. Performed AWS cloud management, providing complete life-cycle administration of multiple environments with continuous code delivery for increased uptime of LAMP and MEAN stack applications, which encompassed the following:
  • Initially, product rollouts were cumbersome and performed manually at DOI datacenters. A migration plan to move the product to AWS was drafted, proposed, and executed, and I was responsible for technical rollout, CI/CD development, patching, and maintenance. I provided programming for multi-stage code delivery on Linux systems and was responsible for over 100 EC2 instances, associated load balancers, S3 datastores, CloudWatch metrics, and Lambda functions across development, staging, and production environments. My responsibilities included code deployment and automation of GeoPlatform NodeJS and LAMP applications using automation written in BASH, Python, and YAML, leveraging AWS CodeDeploy with EC2 and ELB. We staged our deployments using a blue/green technique to validate service availability prior to deploying. CI/CD functions covered the application lifecycle from development through parts of the maintenance cycle (backup, file sync, patching). The end result was overall time and cost savings, automated development builds, push-button production deployments, and operation with a smaller team.
  • I provided AWS administration including, but not limited to, ensuring Elastic Load Balancer availability; upkeep of Amazon Linux EC2 instances and associated AMIs and snapshots; Relational Database Service configuration and upkeep; and S3 configurations providing data availability for automated deployments and maintenance cycles. I used the billing dashboard for cost management of multiple AWS accounts, and used AWS Lambda to script powering development environments off and on to reduce costs during non-peak hours (a minimal sketch follows this list).
  • Performed front-end website maintenance for company and external sites using HTML and CSS standards, plus back-end maintenance of databases, site data files, and operating system updates. Provided scanning, analysis, and remediation of GeoPlatform.gov information systems. Our identity provider for GeoPlatform used SAML, and I provided administration and break/fix of this service as needed. I provided MySQL database administration, including backup, restoration, patching, and other maintenance. I became very familiar with the operation of Drupal and WordPress, which I then incorporated into our CI/CD pipeline. For other company projects I provided guidance, management, and development of multiple cloud environments.
  • Developed a proxy service to allow users to stream on-demand audio from a music streaming service on a mobile web app.
  • Built interactive audio and visual feedback using the HTML5 Web Audio API and SVG.
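
Below is a minimal sketch of the Lambda-based power scheduling mentioned above, using boto3. The tag filter and event shape are assumptions for illustration rather than the production configuration.

    # dev_power.py - illustrative Lambda handler; tag names and event keys are assumptions
    import boto3

    ec2 = boto3.client("ec2")

    def handler(event, context):
        """Stop or start dev-tagged EC2 instances based on the scheduled event."""
        action = event.get("action", "stop")          # set by the schedule rule
        state = "running" if action == "stop" else "stopped"
        resp = ec2.describe_instances(Filters=[
            {"Name": "tag:Environment", "Values": ["dev"]},
            {"Name": "instance-state-name", "Values": [state]},
        ])
        ids = [i["InstanceId"] for r in resp["Reservations"] for i in r["Instances"]]
        if ids:
            (ec2.stop_instances if action == "stop" else ec2.start_instances)(InstanceIds=ids)
        return {"action": action, "instances": ids}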

SAIC

Network Support Lead
2014 - 2015
Supporting the Ground Based Operational Surveillance System (GBOSS), I had overall responsibility for all network configuration, testing, and documentation.
  • At this point in the SDLC, GBOSS was transitioning out of an engineering focus and into more of a maintenance stage. To provide quality assurance I performed systems analysis and testing of all network solutions and configuration enhancements, ensuring that the system fell within acceptable performance baselines. Additionally, I developed and coordinated software regression test plans for execution after each two-week sprint cycle, and provided quality assurance and other technical support before equipment was emplaced in theater environments. This involved direct support to "high-bay / FSR" personnel at the SAIC facility, with me acting as a liaison between these staging personnel and engineering.
  • Supported field testing using the Linux-based Enhanced Battlespace Reconnaissance Intelligence and Surveillance (EBRISS) software to manage sensors during field exercises. I worked with the GBOSS software, network, and information assurance teams to develop mission-critical solutions that fit the needs of each functional area.

Camber

Network Engineer
2012 - 2014
Novonics was purchased by Camber during the early engineering lifecycle of the expeditionary system. I provided a wide range of systems analysis, requirements gathering, and engineering services to the Ground Based Operational Surveillance System (GBOSS), including:
  • Predecessor systems had relied solely on unicast video delivery to distributed nodes, and a requirement to deliver video using multicast subscription was levied. Our team redesigned the existing unicast network infrastructure to allow client endpoints to subscribe to real-time Protocol Independent Multicast (PIM) video streams while still transporting other mission-critical protocols and data.
  • Other requirements emerged to provide voice to each interconnected operator post. I integrated a Cisco-based Voice over IP (VoIP) solution into the network and OS using Call Manager Express in conjunction with Skinny Client Control Protocol (SCCP) and Session Initiation Protocol (SIP) for hardware- and software-based telephony services. Other network roles involved writing and maintaining network-centric documentation and procedures related to requirements, design, implementation, testing, versioning, and specifications.
  • I worked with software, manufacturing, and production teams to ensure correct network operational and installation procedures, and provided troubleshooting of system elements when necessary. Because network deployments were cumbersome and error-prone, I assisted with the development of BASH and Python scripts and helped develop automation procedures for configuring GBOSS network devices over serial connections using the pyserial library (an illustrative sketch follows this list).
  • To test operational readiness I configured Enhanced Battlespace Reconnaissance Intelligence and Surveillance (EBRISS) systems for field-testing exercises, allowing a pass/fail of system configuration prior to reporting to the program office. Ultimately, the system was fielded and deployed to multiple OCONUS sites.
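
An illustrative sketch of the pyserial-based device configuration mentioned above. The port, baud rate, prompt handling, and command set are placeholders, not the actual GBOSS procedures.

    # serial_config.py - illustrative only; port, prompt, and commands are placeholders
    import time
    import serial  # pyserial

    def send_commands(port: str, commands: list, baud: int = 9600, prompt: bytes = b"#") -> None:
        """Push a list of CLI commands to a device over its serial console."""
        with serial.Serial(port, baudrate=baud, timeout=2) as console:
            for cmd in commands:
                console.write(cmd.encode() + b"\r\n")
                time.sleep(1)                          # crude pacing; real code would wait on the prompt
                output = console.read(console.in_waiting or 1)
                if prompt not in output:
                    print(f"warning: no prompt seen after {cmd!r}")

    if __name__ == "__main__":
        send_commands("/dev/ttyUSB0", [
            "configure terminal",
            "interface GigabitEthernet0/1",
            "description uplink",
            "end",
            "write memory",
        ])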

Novonics

Systems Analyst
2008 - 2012
With Novonics, I provided solutions and support on a multitude of internal and contract-based projects, including the following:
  • Supported the Battle Force Tactical Trainer (BFTT), providing technical support for software integration and updating documentation in accordance with engineering guidelines. Much of the work initially revolved around gold-imaging systems, but later transitioned to network support and encryptor firmware update cycles.
  • During the building of the LHA and LPD hull types, the US Navy requested a dedicated portal to track installation control documents, engineering specs, customer requirements, meeting minutes, and other data required for the design and build process. This necessitated the Amphibious Ships Web Application Suite (AWAS), which leveraged Microsoft Office SharePoint Server 2007 as the core collaboration tool. I performed design requirements analysis, workflow design and implementation, and maintenance of the platform throughout its lifecycle. The AWAS suite was used throughout the shipbuilding process and was sunset after the launch of the initial LHA and LPD hulls.
  • IT systems efficiency for maintenance and accountability was suffering due to the lack of a consolidated approach: with a company of around 200 employees, each office had been responsible for its own IT system lifecycle. Under minimal supervision I coordinated new office IT rollouts, which included asset purchasing, infrastructure installation, user support, and configuration documentation. I migrated critical company Deltek, GCS, and ADP payroll data from legacy hardware and software environments into an updated operating environment while minimizing operational downtime and risk, and handled planning, implementation, and maintenance of a payroll warm-site environment, which included automated backups, operational readiness testing, and user support. I assisted with network design and implemented a secure company intranet utilizing IPsec tunnel mode between office locations, segregating different network traffic types via VLANs and providing VPN capabilities for off-site users. Additionally, I was responsible for the administration and maintenance of company-critical data environments including, but not limited to, WSS 3.0 SharePoint, mixed-mode multi-site Active Directory, Hyper-V, and Backup Exec. I provided real-time network monitoring of interconnected site locations for a central network operations center using cost-effective hardware and software solutions.

Public Stuff / Projects

Licenses / Certifications / Awards

Cisco

CCNA
2024 - 2027
Industry networking certification covering TCP/IP, Subnetting, Routing, Switching, and other core topics.

CompTIA

Security+
2024 - 2027
Industry security certification covering a broad range of security related topics.

Clearance

TS
DoD-issued clearance

Amateur Radio Operator

KB9HYS
2024 - 2027
RF enthusiast.

Eagle Scout

An honor to serve the community.