A New Project: fmWATCH
Hi All,
The scripting/coding bug has gotten hold of me over the last couple of years, and I have been creating scripts and tools to improve workflows for my work and my clients.
Over the last couple of months, I have put my big boy pants on, started doing things a bit more properly, and begun using GitHub to track and publish my work.
I have a few in the pipeline at the moment, but the first published and most polished is this: fmWATCH
fmWATCH is a script for monitoring and resolving false mount points
A false mount “Watchdog”
Currently, it targets and addresses the empty mount points created in /Volumes by a bug in macOS 10.12 Sierra and above. When a network drive is already mounted, further attempts to mount it via Finder’s Go > Connect To Server or persistent scripting cause the creation of these empty directories.
To use/test, install the latest release at https://github.com/aarondavidpolley/fmWATCH/releases
Use at your own risk.
Note: the core script uses a non-destructive rmdir command that only removes empty directories in /Volumes, rather than a destructive rm -rf style approach.
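To illustrate what that non-destructive approach looks like, here is a minimal sketch (NOT the actual fmWATCH code; the watch path, variable names, and messages are my assumptions) of removing only the empty directories under a path:

```shell
#!/bin/sh
# Minimal sketch of the non-destructive idea behind fmWATCH
# (hypothetical; not the actual project code).
WATCH_PATH="${1:-/Volumes}"   # assumption: /Volumes is the default target

for dir in "$WATCH_PATH"/*/; do
    # rmdir only succeeds on EMPTY directories; a genuinely mounted
    # volume still contains files, so rmdir fails and leaves it alone.
    rmdir "$dir" 2>/dev/null && echo "Removed empty mount point: $dir"
done
```

Because rmdir refuses to touch a non-empty directory, real mounts survive even if a sweep like this runs against them.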
This is available under the MIT License: https://github.com/aarondavidpolley/fmWATCH/blob/master/LICENSE
Happy Testing!
To Upgrade or Not… macOS
I have heard a common saying in the IT industry around updates to software in general:
“if it ain’t broke, don’t fix it”
Heck, I have even said it myself 🙂
To those saying “if it ain’t broke”… remember we are in an age where staying up to date with SECURITY patches can be the difference between being one of the thousands affected by a harmful threat like WannaCry, or not.
Apple has traditionally only provided updates, especially for security, for the latest and previous 2 macOS versions. More recently, I have seen this change to the latest and only 1 previous for some updates.
If you are running anything older than El Capitan (macOS 10.11), it’s too old and vulnerable. With High Sierra (10.13) out, you should be planning and testing to have Sierra (10.12) rolled out in the next few months.
On the flip side, if you swing on the other side of the pendulum and always want to be on the latest version you have to remember there are ALWAYS bugs and incompatibilities to deal with.
In the IT consulting company I work for, we have already had a few issues with people running macOS 10.13 in our client base. [Currently] it’s .0 software; treat it with that perspective and respect.
The process for anyone considering upgrades should always be:
- Test first in Lab Environment (the sacrificial iMac in the corner as someone said recently)
- Then pilot a small group of machines
- Then eventually roll out to everyone (which I usually do about .3 of a macOS release cycle, by which point the known bugs are usually sorted)
Hope this process thinking helps someone avoid the awful technology disasters none of us want to see in our lifetime 🙂
[Published March 2016 – Updated December 2019]
Hi All,
This has been a long time coming, but I finally organised my church patches for MainStage and they are ready to share :).
Patches Download Link: https://aarondavidpolley.com/share/Aaron_Patches_2019-12-30.zip
Concert Download Link: https://aarondavidpolley.com/share/Aaron_Church_MS3_Concert.zip
Plugins/Software Used
There is an array of different audio plugins & sound sources used in my patches. My older work primarily uses the Logic/GarageBand/MainStage sounds that Apple provides for free (see this link). My newer stuff varies in the plugins used across different “seasons” where I favoured certain plugins/sounds for a time. That said, here is a non-exhaustive list of what you will need to make use of some of my sounds:
- Spectrasonics Omnisphere (1, not version 2): https://www.spectrasonics.net/products/omnisphere/index.php (A LOT from here)
- LennarDigital Sylenth1: https://www.lennardigital.com/sylenth1/
- TAL Software TAL-U-NO-LX: https://tal-software.com/products/tal-u-no-lx (majority of sounds I use for songs from Hillsong United’s Wonder album are from here; credit to Ben Tennikoff for sharing patches with me which I modified to suit my setup).
- Native Instruments The Giant: https://www.native-instruments.com/en/products/komplete/keys/the-giant/ (my primary piano for the last few years)
- Native Instruments Scarbee Mark 1 Rhodes: https://www.native-instruments.com/en/products/komplete/keys/scarbee-mark-1/
- Waves SSL E-Channel (Frequency Compressor): http://www.waves.com/plugins/ssl-e-channel
- Waves H-Delay Hybrid Delay: http://www.waves.com/plugins/h-delay-hybrid-delay
- Voxengo Elephant (Limiter/Compressor): http://www.voxengo.com/product/elephant/
- Valhalla Shimmer (Reverb): https://valhalladsp.com/shop/reverb/valhalla-shimmer/ (I use this on EVERYTHING)
How To Use
Download and unzip my folder of patches and place it into your Music > Patches folder on your Mac. The patches will then be available in your patches library; see: https://support.apple.com/kb/PH13537?locale=en_US
Just download and open the concert to use it 🙂
Please download, try, share, and give me any feedback or questions 🙂
There is a great wave in the MacAdmin community around imaging computers called Modular Imaging.
I am by NO MEANS early to this party; these Mac deployment techniques have been around for quite a while. I am just hoping to bring some light on the subject for those who haven’t quite grasped it yet.
Also known as Thin Imaging (depending on your approach/technique they can be slightly different), the basic concept of Modular Imaging is that you apply the preferences, apps, and other elements of a customised configuration to a machine in layers. As elements of this configuration change, it’s very easy to update individual components, as opposed to building a whole new customised setup.
Hold On – What’s Imaging!?
So if you were like me a few years ago, you were a happy-go-lucky consultant who sold Macs to businesses and home users. You were sometimes lucky enough to get paid to visit these poor souls and be an “IT Guy” for a couple of hours, using your above-average awareness of technology to bedazzle them into making their Mac do what they wanted. If they had 3 or 4 Macs to set up, you went through a process of turning each one on, creating a new user from scratch, installing some software from a CD/DVD, setting some preferences, fighting with a piece of software that didn’t want to play nice and (if you were smart) writing it all down so you could do the steps again verbatim 3 more times over…
That was great… those were good days… You billed a lot of time for the work and the customer was happy everything was handled with a fine-tooth comb and seemed to work well. The issue, though, is that as they got more machines and the company grew, their experience was a little different on each machine; they started having different Mac OS X versions and things felt a little “inconsistent”. You also got a little tired of doing the same process on 5 or 10 different machines… if only there was a way to automate it…
Building An Image
See, if you were a crafty little Mac consultant, you knew about cloning and how a Mac is well able to clone hard drive contents and operating systems from one volume/computer to another. If you replaced the hard drive in a white 2008 MacBook, upgrading from a 120GB 5400rpm drive to a 500GB 7200rpm drive, you could just restore/clone the contents from one to the other over FireWire 400, and in a couple of hours the computer would “Just Work” like it did before, except quicker and with more hard drive space! No re-install or re-setup of an operating system required! Yay!
The concept of imaging built on this principle: if I bake a core set of users, settings, and apps into a Mac I am setting up, I can save this “Macintosh HD” volume, with all of the included hard work, to a Disk Image (DMG) file and then restore/copy it onto other Macs so that they all have the same content, but I only have to do the “hard” work ONCE. I could then run around with a bootable hard drive containing the DMG and restore it onto as many Macs as I wanted; brilliant!
The Next Step: NetBoot
Apple invented a wonderful technology (in 1999, according to Wikipedia) called NetBoot. This allows a Mac to start up from a system that is not physically living on its own hardware. In other words, a bootable system living somewhere on the local network can be used to start up the Mac instead of the operating system on its internal hard drive.
A variation of NetBoot called NetInstall was also created, which means a remote startup system can be used not only to run the local Mac but also to install a new/different system onto it. This means you can have a specific Mac operating system installer prepared on a Mac server and start a new Mac up using NetBoot to install it. Brilliant!
Using another variation of NetBoot called NetRestore, you can take a prebuilt Mac disk image (like we made in the section above), put it on a Mac server, and restore it onto our local Mac over the network. This technology opened A LOT of doors in places like school computer labs, as you could have a couple dozen Macs all with the same prebuilt Mac OS configuration in a fairly easy manner. It also meant that if a kid (or employee of a company) really screwed up a computer system, no worries! Just start the computer up from the NetRestore system and after a few minutes, voila! All back to normal. Adding more Macs into the lab? No worries, as NetRestore was to the rescue…
And It Was Born: Monolithic Imaging
So, if Modular Imaging is the new cool way, what is the old way called? Monolithic or Thick Imaging
Opposite of Modular = Monolithic
Opposite of Thin = Thick
Ok, now that we have that covered, let me explain the photo above…
Monolithic imaging is the process of creating an SOE (Standard Operating Environment) by baking a bunch of core users, settings, and apps (sound familiar?) into a Disk Image (also known just as an “image”) and deploying it to a bunch of computers. The issue with this process is that you wind up with a bloated image file that you need to restore onto erased computers every time you want to use it, and it’s NOT very agile or flexible when making changes (now the photo starts to make sense…).
In a Monolithic imaging process you need to make a new copy of the image every time you change preferences, update software, etc. As an example, if you have the full Adobe Master Collection installed in an SOE image for a school lab, you could be dealing with an 80GB or 90GB file that needs to be created and compressed every time you make changes; that is a huge amount of pain and time just in creating image files. If you get a new set of Macs for a school that need to run the latest macOS but your SOE was built for an older version, it takes a lot of work to get it ready for the new machines. Again, not an ideal scenario, because in a lot of cases your SOE wasn’t freshly made; it was often upgraded and adapted a few times to get it to where it needed to be, making it a bit of a Frankenstein.
Profiles and Packages
So a time came for me in my journey to MacAdmin-dom (MacAdminhood?) when I was baking some clever tools into my grand Monolithic/SOE images and deployment techniques. I was using tools like DeployStudio (we’ll get to that) to deploy my images over the network to Macs, with various packages rolled in on the fly to update/adjust my pre-made images; this made sure I didn’t have to re-create my image every time I needed to make changes, just for some things.
I figured out ways to add preference files and other content to User Folder templates so that when network directory users logged into a Mac for the first time, they had settings already set for their first launch of Microsoft Word. I used configuration profiles made by Apple’s Profile Manager to set Dock items, Energy Saver preferences and other items so I wouldn’t need to bake them into my SOE… things were looking up, with a world of intelligent automation at my fingertips. I was using layering techniques to add on top of a pre-baked image to alter its behaviour on the fly…
AutoDMG: No More Waiting For Installers
GAME CHANGER! Ok, this is where things really got interesting. Here is a tool developed by Swedish mastermind Per Olofsson which could take a macOS installer app and create a fresh DMG file of that OS, PLUS include some of your favourite apps/packages to create an SOE; how cool!
All we had to do was drag the installer app onto a screen, drag in a few packages and voila! We had an image with pre-baked stuff we could deploy. Gone were the days of turning on a machine, creating a user, installing some stuff, installing some more stuff, waiting for that to finish to install some more stuff…
We could use AutoDMG to create an image and either deploy it using a NetBoot-style function or stick it on a hard drive and run around doing an Apple Disk Utility restore or Carbon Copy Cloner clones…
And Then It Hit Me…
If I use AutoDMG to make completely blank macOS images and then roll in everything else dynamically as their own “layers” we get a VERY flexible and easily changeable deployment system that evolves as the environment does. New Macs? No problem. New Macs that need a different operating system? No problem, insert a new AutoDMG image. Want different deployment sets for different departments that all use the same core elements but build different apps per department? No problem….
ZERO time invested in building new DMG files for deployment; rather, 100% of my time spent building cool logic into deployment workflows, with intelligent packages and scripts to customise the Mac being deployed.
I was seeing the fruits of modular elements in my packages and scripts for my SOE deployments; I was hearing the terms Thin Imaging and Modular Imaging being thrown around on the web but hadn’t seen them first hand.
Once I connected the dots… my own Modular Imaging workflows were born
True Thin Imaging: Profiles & Packages
Wait, didn’t I already read this section!?
No, no you didn’t…. well sort of 🙂
So remember when I said Thin Imaging is the same as Modular Imaging? Well its ALMOST the same, except Thin Imaging is ONE STEP FURTHER.
Thin Imaging basically takes all of the workflows you developed for Modular Imaging and removes one element: erasing the machine and restoring a base image. In true Thin Imaging you do not need to create a fresh OS with AutoDMG; instead, you install profiles and packages on the fresh (or used) macOS that came with the computer you are “imaging”.
I like to think of true Thin Imaging more as “Provisioning”, as you are applying apps and settings to what’s already there rather than wiping it and giving it a fresh start. Of course, the end goal is the same: a Mac with the specific functionality the end user needs to get stuff done (and hopefully enjoy it :))
A Chicago-based service called Robot Cloud is a great example of Thin Imaging in action, with a tool called JAMF Pro (formerly known as Casper Suite) at its core.
With services like Apple’s DEP (Device Enrolment Program) and VPP (Volume Purchase Program), devices can hook into a management system at the point of activation and start being configured with assigned settings, apps, and even device-specific licensing, meaning the hands-on touch required from IT is minimal (often non-existent); this is the future of device deployment.
Modular Example
“OKAY…. I AM INSPIRED…”
But now we need examples 🙂
Here is a modular workflow example I built for a customer using DeployStudio
The Structure
The server infrastructure consists of 3 components:
- The DeployStudio Server software installed on a Mac Mini (running Server App) as the brains of the operation
- The DSS (DeployStudio Server) repository living on a Synology NAS (could live on same server as above)
- The NetBoot/NetInstall services running on the Mac Mini with Server App, allowing clients to Option+N boot and start up from either a 10.10.5 or 10.11.6 NetBoot image for deployment
The Workflows
In this example there are 3 workflows developed:
- eCommerce Restore
- General Restore
- Video Edit Restore
The workflows have been set up in a modular way so that over time, as the deployment requirements evolve, it’s easy to add to the existing workflows. They currently represent 2 department-specific workflows and 1 general workflow.
Sub Workflows
In order to achieve a simple layering structure, the workflows above actually have no smarts. Each workflow has an Alert to advise what’s going to happen, but then has nested workflows to do all of the work. This way, if we want to update the OS version used in ALL workflows, we edit it in one place, not multiple.
Each top-level workflow has the same 3 core sub workflows, and then the others have anything specific to the type of machine.
Core OS Restore does the following:
- Asks for hostname/computer name
- Asks for user accounts to create
- Erases/Partitions the boot drive
- Restores the fresh/empty macOS image
- Configures the items requested earlier (hostname, user account, etc)
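To give a feel for that last step, the “configure the items requested earlier” part largely boils down to a handful of commands run as root on the restored system. Here is a hypothetical sketch (the example name, the sanitisation rule, and the macOS-only guard are my assumptions, not the actual workflow script):

```shell
#!/bin/sh
# Hypothetical first-boot config sketch; NAME would come from the
# value typed into the deployment workflow's hostname prompt.
NAME="Example Mac 01"

# Sanitise into a safe hostname: lower-case, spaces become hyphens.
SAFE=$(printf '%s' "$NAME" | tr 'A-Z ' 'a-z-')
echo "Setting hostname to: $SAFE"

# Apply it to all three macOS name records (scutil exists only on
# macOS, so guard the calls for portability of this sketch).
if command -v scutil >/dev/null 2>&1; then
    scutil --set ComputerName "$SAFE"
    scutil --set HostName "$SAFE"
    scutil --set LocalHostName "$SAFE"
fi
```

User account creation and the other requested items would be further steps in the same vein, driven by the values collected at the start of the workflow.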
Core OS Config does the following:
- Installs a predefined admin user account
- Installs monitoring software
- Installs remote support software
- Installs a configuration profile for managed settings
- Copies user login script files
- Copies custom desktop background and custom user logo
Core Apps Install does the following:
- Installs common apps and plugins like Adobe Flash, Numbers, Pages, Keynote, Firefox, Chrome, and Microsoft Office
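If you wanted to script this layer yourself rather than have DeployStudio run each package, it could look something like this sketch (the package directory is a placeholder of my own, not a path the workflow actually uses):

```shell
#!/bin/sh
# Hypothetical "core apps" layer: install every package in a folder,
# in name order. Run as root. The directory path is a placeholder.
PKG_DIR="/Library/Management/CorePackages"

for pkg in "$PKG_DIR"/*.pkg; do
    [ -e "$pkg" ] || continue        # nothing to do if no packages
    echo "Installing $(basename "$pkg")..."
    installer -pkg "$pkg" -target /  # macOS's built-in CLI installer
done
```

Prefixing package names with 01-, 02-, etc. is a simple way to control install order when one package depends on another.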
If you need to update the version of the OS used, use our trusty tool AutoDMG and just replace the OS image in the “__1 Core OS Restore” workflow, which is used by all 3 public-facing workflows.
If you want to update the version of Flash used (with a package created by AutoPKG), you only need to replace it in the “__3 Core Apps Install” workflow, which is used by all 3 public-facing workflows.
Making New Workflows
If you want to make a new workflow (e.g. for the Retail department), you would copy the “*eCommerce OS Restore” and “_eCommerce Apps Install” workflows, rename them appropriately for Retail, and make sure the top level references the bottom level along with the other 3 core sub workflows.
Automating Your Packages
As mentioned above, you could improve on this example setup by implementing a more ongoing and automated app deployment suite like Munki, AutoPKG, and AutoPKGr. These could download regular app updates and deploy them automatically to both new and existing machines. I will write separate blog posts about these tools in the future, but Munki-In-A-Box is a great way to get jump-started with them.
Make This “Thin”
If you want to do all of this without restoring a fresh OS and just use what the machines already have, you’d make some minor tweaks, like removing the re-partition and restore parts of the Core OS Restore workflow.
Benediction
Hopefully this post has been entertaining and somewhat enlightening. If you have any questions or comments, feel free to get in touch.
macOS Sierra 10.12 Compatible Apps
Here is a great resource I have found for checking App compatibility with macOS Sierra.
Bookmark and share as desired :).
http://forums.macrumors.com/threads/macos-sierra-10-12-compatible-apps.1977335/
Hi All,
As you may have seen, I recently altered my website to be more in line with my real day-to-day life, not just the music side; I am now “Musician & MacAdmin”. You can see my latest MacAdmin posts on the front page, as well as my Music-related items and news.
I just got back from the JAMF Nation User Conference (JNUC) 2016 in Minnesota and I loved it. It was a great atmosphere for learning and collaborating with fellow MacAdmin minds. My employer Max Computing sent me, for which I am grateful.
Over the coming days I will try to post a summary of the different sessions/events I enjoyed, but here is a top-level summary:
- The opening session(s) featuring CEO Dean Hager – he is an inspiring man and charismatic to say the least. The work he is personally doing in the social justice realm, as well as the work he helps the JAMF Foundation do, is remarkable and not common in corporations
- The renaming of the company from JAMF Software to “JAMF” to support product rebranding of Casper Suite and Bushel to JAMF Pro and JAMF Now respectively
- JAMF Patch Management – seeing the direction they were going and how it stacked up against solutions like Munki
- Shopify’s Managing Devices in an Open Culture: great look at how their IT staff took a bunch of tech heads used to being the master of their own machine and convinced them that Mac Management was a good thing for them and the company
- The Mac@IBM presentations: truly an inspiring moment to see how they have become a flagship for Mac deployment in the enterprise
- Making Self Service a killer app from Paul Cowan of Waikato University in New Zealand
- User configuration framework: a great new tool developed for configuring apps and services at user login AND utilising the sign-on password for an SSO (single sign-on) experience. https://github.com/alex030/UserConfigAgent
- Using Swift and the JSS API: a great session as an early introduction to coding in Swift, as well as how to do some basic functions for importing machine placeholders into the JSS (JAMF Software Server, aka Casper, aka JAMF Pro) to automate device enrolment
- Profiles: An IT Admin’s Best Friend, from the boys at dataJAR in the U.K. Hilarious and insightful, it gave a great backbone understanding of managed preferences and how they have evolved, plus some best practices
Overall it was a great conference and I hope to share more soon. In the meantime you can check out the discussion links for each session and see if the slides have been posted.
Hi All,
After wrestling with a few Mac OS X 10.10.5 servers running Server App 5, I finally figured out how to run Kerio Connect (or other web services apps) alongside the Server App 5 web services.
Here is a basic document outlining the process.
NOTE: Kerio’s KB article about this procedure is not correct for 10.10.5; though it works for Kerio, it will fill the server with error logs and break OS X Web Services.
**********************************************************************
sudo pico /Library/Server/Web/Config/Proxy/apache_serviceproxy.conf
(OR sudo -i and then just the command pico: BE CAREFUL – sudo -i gives you full root access to delete and do all sorts of nasty things)
Once in the file:
1. Press ^W (control+w) – a “find” command
2. Press ^R – (control+r) “replace” command
3. Type “*:” (without quotes) and press enter
4. Type “10.9.8.7:” (without quotes; or some other IP number that is not on your network OR is not the IP you want to bind Kerio Connect to) and press enter
5. Type “a” (to replace all)
6. Press ^X – (control+x) to exit
7. Type “y” to say yes to changes and save
What you have just done is replace all virtual host entries with a specific IP AND port to bind to, rather than binding to a port on ALL IPs.
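If you’d rather script the edit than drive pico by hand, the same find-and-replace can be done with sed. A sketch, using the example IP from the steps above (note the BSD `sed -i ''` syntax on macOS, and the guard I’ve added so it bails gracefully on a machine without Server App):

```shell
#!/bin/sh
# Replace every wildcard bind ("*:") with a specific unused IP, as in
# the pico steps above. 10.9.8.7 is just the example address.
CONF="/Library/Server/Web/Config/Proxy/apache_serviceproxy.conf"
[ -f "$CONF" ] || { echo "Config not found: $CONF"; exit 0; }

sudo cp "$CONF" "$CONF.bak"                  # keep a backup first
sudo sed -i '' 's/\*:/10.9.8.7:/g' "$CONF"   # BSD sed (macOS) syntax
```

Keeping the `.bak` copy means you can diff or roll back if the proxy misbehaves after the restart.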
The entire server needs to restart for the changes to take full effect.
Alter the IP address you use in the find and replace above if you have a secondary IP on the Mac in question: either virtually, with a 2nd Ethernet interface linked to the same physical Ethernet interface; via a secondary Ethernet port on a Mac Pro; or via USB/Thunderbolt on other Macs. If you use another IP known to the OS, you will need to alter the IP address that Kerio Connect binds to under Services in the Kerio Connect admin web page.
As far as my testing and live server environments have shown so far, this doesn’t fill the error logs and gives the Mac a chance to breathe.
NOTE 1:
With this method you WILL see what appear to be OS X services bound on *:80, *:443, etc. when using a command like sudo lsof -i -n -P | grep TCP. This is true inside of the overall proxy, which we have bound specifically to 10.10.10.3 in this case. Kerio Connect will still happily start and run alongside these OS X services. See the example output of sudo lsof -i -n -P | grep TCP below (filtered to anything containing 80, 443, 88, 8843, 8443, 8080 or 8800).
NOTE 2:
Leave the listen entries at the top of the file un-commented:
listen 80
listen 443
listen 8800
listen 8843
This is contrary to Kerio’s article in their KB.
Example Command: See Active TCP Web Ports
server:~ root# lsof -i -n -P | grep TCP | grep ':80\|:443\|:88'
kdc 102 root 6u IPv6 0x710ebe3267aea943 0t0 TCP *:88 (LISTEN)
kdc 102 root 8u IPv4 0x710ebe32690269f3 0t0 TCP *:88 (LISTEN)
mailserve 111 root 41u IPv4 0x710ebe3267aeede3 0t0 TCP *:44337 (LISTEN)
mailserve 111 root 64u IPv4 0x710ebe3279e642c3 0t0 TCP 10.10.10.3:443->10.10.10.1:58518 (ESTABLISHED)
mailserve 111 root 70u IPv4 0x710ebe3279e8cf83 0t0 TCP 10.10.10.3:443->10.10.10.1:49227 (ESTABLISHED)
mailserve 111 root 75u IPv4 0x710ebe3279d5c513 0t0 TCP 10.10.10.3:443->10.10.10.1:52602 (ESTABLISHED)
mailserve 111 root 155u IPv4 0x710ebe3279e9db93 0t0 TCP 10.10.10.3:80 (LISTEN)
mailserve 111 root 156u IPv4 0x710ebe3279e9c9f3 0t0 TCP 10.10.10.3:8800 (LISTEN)
mailserve 111 root 160u IPv4 0x710ebe3279e9b853 0t0 TCP 10.10.10.3:443 (LISTEN)
mailserve 111 root 163u IPv4 0x710ebe3279e9a6b3 0t0 TCP 10.10.10.3:8843 (LISTEN)
httpd 756 root 5u IPv6 0x710ebe3277771c43 0t0 TCP *:80 (LISTEN)
httpd 756 root 7u IPv6 0x710ebe3277771743 0t0 TCP *:443 (LISTEN)
httpd 756 root 9u IPv6 0x710ebe3277771243 0t0 TCP *:8008 (LISTEN)
httpd 756 root 11u IPv6 0x710ebe3277770d43 0t0 TCP *:8800 (LISTEN)
httpd 756 root 15u IPv6 0x710ebe3285c60743 0t0 TCP *:8843 (LISTEN)
httpd 764 _www 5u IPv6 0x710ebe3277771c43 0t0 TCP *:80 (LISTEN)
httpd 764 _www 7u IPv6 0x710ebe3277771743 0t0 TCP *:443 (LISTEN)
httpd 764 _www 9u IPv6 0x710ebe3277771243 0t0 TCP *:8008 (LISTEN)
httpd 764 _www 11u IPv6 0x710ebe3277770d43 0t0 TCP *:8800 (LISTEN)
httpd 764 _www 15u IPv6 0x710ebe3285c60743 0t0 TCP *:8843 (LISTEN)
httpd 765 _www 5u IPv6 0x710ebe3277771c43 0t0 TCP *:80 (LISTEN)
httpd 765 _www 7u IPv6 0x710ebe3277771743 0t0 TCP *:443 (LISTEN)
httpd 765 _www 9u IPv6 0x710ebe3277771243 0t0 TCP *:8008 (LISTEN)
httpd 765 _www 11u IPv6 0x710ebe3277770d43 0t0 TCP *:8800 (LISTEN)
httpd 765 _www 15u IPv6 0x710ebe3285c60743 0t0 TCP *:8843 (LISTEN)
httpd 766 _www 5u IPv6 0x710ebe3277771c43 0t0 TCP *:80 (LISTEN)
httpd 766 _www 7u IPv6 0x710ebe3277771743 0t0 TCP *:443 (LISTEN)
httpd 766 _www 9u IPv6 0x710ebe3277771243 0t0 TCP *:8008 (LISTEN)
httpd 766 _www 11u IPv6 0x710ebe3277770d43 0t0 TCP *:8800 (LISTEN)
httpd 766 _www 15u IPv6 0x710ebe3285c60743 0t0 TCP *:8843 (LISTEN)
httpd 767 _www 5u IPv6 0x710ebe3277771c43 0t0 TCP *:80 (LISTEN)
httpd 767 _www 7u IPv6 0x710ebe3277771743 0t0 TCP *:443 (LISTEN)
httpd 767 _www 9u IPv6 0x710ebe3277771243 0t0 TCP *:8008 (LISTEN)
httpd 767 _www 11u IPv6 0x710ebe3277770d43 0t0 TCP *:8800 (LISTEN)
httpd 767 _www 15u IPv6 0x710ebe3285c60743 0t0 TCP *:8843 (LISTEN)
httpd 768 _www 5u IPv6 0x710ebe3277771c43 0t0 TCP *:80 (LISTEN)
httpd 768 _www 7u IPv6 0x710ebe3277771743 0t0 TCP *:443 (LISTEN)
httpd 768 _www 9u IPv6 0x710ebe3277771243 0t0 TCP *:8008 (LISTEN)
httpd 768 _www 11u IPv6 0x710ebe3277770d43 0t0 TCP *:8800 (LISTEN)
httpd 768 _www 15u IPv6 0x710ebe3285c60743 0t0 TCP *:8843 (LISTEN)
httpd 820 _www 5u IPv6 0x710ebe3277771c43 0t0 TCP *:80 (LISTEN)
httpd 820 _www 7u IPv6 0x710ebe3277771743 0t0 TCP *:443 (LISTEN)
httpd 820 _www 9u IPv6 0x710ebe3277771243 0t0 TCP *:8008 (LISTEN)
httpd 820 _www 11u IPv6 0x710ebe3277770d43 0t0 TCP *:8800 (LISTEN)
httpd 820 _www 15u IPv6 0x710ebe3285c60743 0t0 TCP *:8843 (LISTEN)
I Woke Up and I Was a SysAdmin
Hey Guys,
For those of you who don’t know, I have stumbled into a career in IT. It was completely unplanned, at first not quite welcome, but now warmly embraced. I worked for almost 7 years in a company where I went from being a Casual Retail Associate up to a Senior National Solutions and Support Manager. During that time I transitioned from simple sales to complex consultation, ever increasing my technical knowledge along the way. A couple of years ago I made the jump to a full-time “tech” role and haven’t looked back.
For those who don’t know what a SysAdmin is… it’s a Systems Administrator, aka a person who manages IT systems. If you are a SysAdmin, excuse my explanation…
So anyway, all of this to say, I am going to start posting nerdy tech stuff because I like it and (from what people have told me) I’m good at it. Over the last couple of years I have built complex workflows from the ground up and read a lot of tips and tricks from great minds like Charles Edge (http://krypted.com/) and Tim Sutton (https://macops.ca)… and of course by “Googling”…
Stay tuned for some nerdy posts.
Hey All,
It’s been a while since I wiped the dust off the electronic blogging canvas.
Here is a great company I have stumbled on that I thought would be worth sharing:
TAL – Togu Audio Line is a small company founded around the year 2000 and located in Switzerland. We create high quality instruments and effects with a user friendly interface for reasonable pricing. Our commercial vintage synth emulations are known to be very accurate and authentic. TAL software is in use by popular producers, musicians and mentioned in different interviews.
We are not a marketing driven company and we also do not spend a lot of time into a secure copy protection systems. Our main focus is on the product itself.
Check out these great plugins, test them out, and throw a few dollars their way if you like the merch 🙂