How to learn to drive a stick shift in 2020

#1. Call the cops on yourself for grand theft.

#2. Hop into a car with a stick shift. Preferably one that looks like it’s well-insured.

#3. Drive it like you stole it… because you are stealing it.

#4. Profit or fail.

You either know how to drive a stick now, or at the very least you know not to drive one… and if you failed, you have some time to think about what you can do better now that you’re in jail.

Ab Irato – 2000

I unearthed this set of songs recently. It’s from 1999–2000, the musical portion of my senior thesis (Div III) at Hampshire College. There are other songs that I still need to pull off the CD, which I found while moving my mother out of her house after my father passed away. As far as I remember, all of the sounds were programmed from scratch. I actually wrote code for this project. It’s an awful way to compose music.

The songs are mastered for car stereos, which was pretty much how everyone listened to music 20 years ago. They sound decent through a nice set of headphones too. I’m still very proud of this. It took me a long time.

Believe it or not, I made these when audio tapes were still being made, and only nerds had mp3s.

Coding Cat

This is a stupidly simple trick for debugging code. It almost never fails, and it lets you keep your cool while going through your code.

I used to work with a friend who one day asked me to come over to his desk and look at something. His exact request: “Be my cat for a second.”

I looked at him blankly and he said, “Just sit on the desk next to the monitor.”

I most likely replied, “Are you going to paint me like one of your French girls?”

He then proceeded to go through his code, explaining to me how it worked. I just nodded and eventually he found his error and said I could leave.
I now have a picture of a cat at my workstation that is my coding partner. If something doesn’t work, I explain my code to the picture until I find the error.

Your coding cat is anything you want to explain your code to while you’re debugging. The picture attached here is my real-life cat, Bacchus. I ask him to help me, but he likes to chew on my monitors and knock things off my desk.

What most people don’t get is how frustrating it is to work on something for hours only to have nothing happen, or to get a completely unexpected result. Coding and development are not the cut-and-dried process non-coders think they are. Do yourself a favor and don’t drive yourself crazy. This stupid little trick will save you a little sanity. If you’re working in an office with others, it might also make you a little more interesting. Because, let’s face it, you’re a very unstable and boring person who talks to pictures of cats instead of real people.

Quick Tip – Security as a Talking Point

Unless you *know* your company’s systems are extremely secure, don’t go around the Internet or in public telling people their data is secure with you.

If you don’t know much about security and are only using it as a talking point because it matters to people, don’t talk about it. You are inviting criminals to attack you and your systems just for the fun of it. Your systems are secure primarily because you’re not being actively targeted. If someone, or a group, wants to get into your systems, they’ll find a way. Don’t put a target on yourself. It will cost you and your business.

AI Isn't Coming for Our Jobs, but It Will Change Them.

Let’s be clear: AI is not a threat to most people’s jobs. AI and software will be created to augment the roles that exist in offices today. We’re not going to get rid of nurses, lawyers, accountants, welders, etc. We’re going to augment them with new software and new algorithms. New jobs will be created to take the place of the jobs that disappear, or just maybe… your job will get easier. Wouldn’t that be nice?

Unless something goes incredibly wrong (or right, depending on whom you ask) and we end up in an incredible dystopian wonderland, software will always be developed by people, for people, to make our lives better and easier.

If there’s anything I know, it’s that humans make mistakes. Tons and tons of mistakes. Software is never going to be self-correcting about anything other than syntax, and it will always have human authors: software developers, business owners, etc. The problem with an AI that is static, or that merely follows trends, is that it has to be watched, because the world is ever-changing. What looks today like a clear view of an algorithm’s intended consequences down the road may be completely “wrong” and “bad” in the near future.

Take credit applications, for example. Those applications haven’t been processed by hand for a long while. I’m sure there are smaller institutions that still look through them manually, but not many. Acceptance and denial, at least on the first pass, are handled by programs that take into account all of the variables about the applicant and their circumstances. The program uses rules, written by someone at some point. If your application doesn’t meet certain thresholds for those rules, you are denied credit, or offered alternatives or different terms. It’s completely acceptable for a company to use an algorithm like this for credit applicants. The bank knows how much risk it’s willing to take on for the reward, and that of course changes based on a large number of criteria: day-to-day, town-to-town, state-to-state, country-to-country, depending on the applicant and the institution they’re applying to.

So the algorithm in the software above is based on “business rules”. Those rules act like a huge, ever-changing filter, first written by hand years ago. As one value goes up, another goes down, and so on. In the end your application is given a score, and based on that score you’re either accepted or denied for credit.
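To make that concrete, here’s a toy sketch of what a hand-written rule filter like this might look like. Every threshold, weight, and input here is invented for illustration; a real lender’s rule set would have hundreds of variables.

```python
# A toy, hand-written "business rules" scorer. All numbers are made up.
def score_application(income, debt, years_employed, late_payments):
    """Return a score; each rule nudges it up or down."""
    score = 600
    # As debt-to-income goes up, the score goes down.
    dti = debt / income if income else 1.0
    if dti < 0.2:
        score += 60
    elif dti > 0.45:
        score -= 80
    # Longer employment nudges the score up, capped at 10 years.
    score += min(years_employed, 10) * 5
    # Every late payment on record costs points.
    score -= late_payments * 25
    return score

def decide(score, threshold=620):
    """The first-pass accept/deny filter: compare score to a cutoff."""
    return "approved" if score >= threshold else "denied"
```

A stable applicant (good income, low debt, clean history) clears the cutoff; a riskier one doesn’t. The pain point is exactly what the paragraph above describes: someone has to keep re-tuning these numbers by hand as conditions change.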

How does AI fit in here? Well… see that set of business rules in the example above? It turns out we can automate the writing of that incredibly complex rule set by writing an AI. To get such an algorithm, we need a set of old data to feed the AI. We already know all of the parameters of the “credit application”, and if we’re an established bank, we have years of examples of loans that were given and how they were paid back. Those examples are our data set. To keep it simple: we put them together, and the “AI” creates a new algorithm that fits the lender’s needs for credit applicants. Throw in the magic of AI, software, and nerds, and… voilà! A new algorithm is born! This is what is commonly referred to as AI in most businesses. It’s far more efficient than a human would be at going through all of those applications, and it can make decisions tuned to the lender’s targets. It’s not emotional, and since business is about making money, it’s created with its eye on the prize.
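A minimal sketch of that idea, “letting the historical data write the rules”: fit a tiny logistic-regression scorer on past loan outcomes instead of hand-coding thresholds. The data set and features here are entirely hypothetical, and a real lender would use a far richer model and far more history; the point is only that the weights are learned, not hand-written.

```python
import math

# Hypothetical loan history: (debt_to_income, late_payments, repaid?)
# where repaid is 1 if the loan was paid back and 0 if it defaulted.
history = [
    (0.10, 0, 1), (0.15, 1, 1), (0.20, 0, 1), (0.25, 1, 1),
    (0.50, 4, 0), (0.60, 3, 0), (0.55, 5, 0), (0.70, 2, 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Gradient descent on log-loss learns the weights the old rules hard-coded.
w_dti, w_late, bias = 0.0, 0.0, 0.0
lr = 0.5
for _ in range(5000):
    for dti, late, repaid in history:
        p = sigmoid(w_dti * dti + w_late * late + bias)
        err = p - repaid          # prediction error for this example
        w_dti -= lr * err * dti   # nudge each weight against the error
        w_late -= lr * err * late
        bias -= lr * err

def approve(dti, late):
    """Approve if the model predicts repayment is more likely than not."""
    return sigmoid(w_dti * dti + w_late * late + bias) >= 0.5
```

Retraining on newer history updates the “rules” automatically, which is why this has to be watched: the model is only as current, and as fair, as the data it was fed.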

Now… there will still be bankers. If you get denied for credit, you can always take your business elsewhere. Smaller local lenders tend to be nicer about these things and take more risks; you could go into the bank and try to make a case for why they should lend to you. There are all sorts of ways to get credit. That’s not the point, though. The point IS… that AI has augmented, or simplified, the process of approving or denying credit for the bank. It hasn’t gotten rid of the bankers who are still there to serve customers; those bankers in your local branch have just taken on different responsibilities. Sure, there are probably a few fewer bankers, but by and large AI is there to help people, not replace them.

As time progresses, as it is wont to do, we’ll find that jobs shift as some are replaced by software. Take robotics, for example. Robots will take jobs, but writing software for those robots, and repairing them, will become an in-demand skill. Like I noted earlier, people make mistakes. Lots and lots of mistakes. It’s OK. To err is human, as they say. We’ll keep making new things and new processes until we’re destroyed, or become one with machines, or whatever happens. We’ll use the tools we create, or currently have at our disposal, to make new things, as we always have.

Embracing the future is an everyday process. Don’t let others drive you to fear it. As far as we know, the past is not coming back. Don’t fear the unknown; be prepared for change. Look at the work you do and try to figure out how you could make it easier for yourself or others to accomplish. Why not make those changes yourself and sell them to others? If you don’t do it, someone else will. Or maybe you don’t want to do that; then go find a solution someone has already made. It’s not as scary as some would like you to think.

A Quick Lesson on Pipeline Architecture

While not many people talk about production pipelines in their everyday work, they’re a factor in every industry out there. Every product you have ever used, every meal you have ever eaten, everything you have ever accomplished has had some sort of pipeline. Whether the production of said thing was designed or just happened, it had a pipeline.

Optimization is exciting.

Pipelines are what fascinate me about the world. Top-to-bottom maps of materials in, product out are exciting. If you analyze the pipelines of one business, you can apply the lessons learned to other, seemingly unrelated businesses. If you’re not learning lessons from others, and stealing from them, you’re probably doing it wrong.

Part of Toyota’s philosophy is a process of continuous improvement called Kaizen. The concept is essentially that the people doing the work make continuous improvements to their own process. Everyone is working towards the same goal, and everyone has input on how to make everything run more efficiently. There’s more to it; read about it at the website of the Kaizen Institute.

https://www.zdnet.com/article/google-deepminds-sideways-takes-a-page-from-computer-architecture/

The article linked above is about something called back-propagation in deep learning. For its purposes, you don’t need to understand anything about neural networks; the article only loosely describes what they are, and how lessons already used in modern computer chip architecture can be used to speed up AI processing. Its beauty is that even if your business has nothing to do with computers, processor architecture holds some great lessons you can apply to your own business and its efficiency. It very quickly describes how you could change and adapt your product pipeline to be more productive with the same number of team members.

Understanding Machine Learning: From Theory to Algorithms – Book

This book from 2014 is a decent starting point for diving into ML (machine learning). There are exercises included, and it does a fairly good job of taking the reader from almost no knowledge of ML up to more complex subjects, many of which most people will never use. Don’t be fooled by the format: this is definitely a course textbook.

Machine learning is one of the fastest growing areas of computer science, with far-reaching applications. The aim of this textbook is to introduce machine learning, and the algorithmic paradigms it offers, in a principled way. The book provides an extensive theoretical account of the fundamental ideas underlying machine learning and the mathematical derivations that transform these principles into practical algorithms. Following a presentation of the basics of the field, the book covers a wide array of central topics that have not been addressed by previous textbooks. These include a discussion of the computational complexity of learning and the concepts of convexity and stability; important algorithmic paradigms including stochastic gradient descent, neural networks, and structured output learning; and emerging theoretical concepts such as the PAC-Bayes approach and compression-based bounds. Designed for an advanced undergraduate or beginning graduate course, the text makes the fundamentals and algorithms of machine learning accessible to students and non-expert readers in statistics, computer science, mathematics, and engineering.

In all honesty, I’ve only browsed through parts of the book. It’s on my reading list, but like many people’s, the sheer length of my reading list is far beyond any distance I’ll live long enough to travel.

The nice part about this book is that it’s free. Whether or not you like it, you never have to pay for it. The link below goes to a full PDF copy of the book, free for individual use, along with a solution manual in PDF and some of the courses that have used the book.

https://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning/index.html

If you like the book, please consider purchasing it at the link below.

https://www.amazon.com/Understanding-Machine-Learning-Theory-Algorithms/