Sunday, January 11, 2026

IT Job Advice: Small Habits That Save You

In the world of IT, there are plenty of small things you can do that save you a lot of time. These habits save you by keeping everything under control and clear, so it's a very good idea to make them strong habits.


One of the biggest habits is to ask clarifying questions about everything, and to re-clarify if necessary. While troubleshooting or talking to a user, new details will very likely surface, and it always seems like they were either assumed knowledge or deemed unimportant. Constantly asking and re-wording questions may seem annoying, but it will definitely save you. No matter how clear anything seems, just keep questioning everything.


Another big habit is to keep the scope in mind. What I mean is specifically asking yourself, "What do I need to do to resolve this/complete this?" It's not uncommon for extra tasks to be added on to tickets or projects as a situation develops, and I've been stuck in meetings that turned into hours of irrelevant asides. Whenever extra stuff comes up, always re-state the scope of what it is you are supposed to be doing. If anything extra needs to be added, it needs to be understood that this changes the scope and can eat up more time, or it can simply be handled as a separate ticket or project. If you're fixing web access and there's also a printer problem, fix the original task first, then start a new workflow for the other. It may be annoying at first, but it will save you hours in the long run.


Tracking information is also super important, so make it a habit to take notes. Carry something for note taking, whether it's a notebook, laptop, tablet, or voice recorder. If anything comes up that isn't written down, get it written down so you track as much information as possible. You might even note potential scope creep to anticipate a future project or ticket that may come across your desk.


Finally, an important habit for projects: track your dependencies. If things happen out of order, stuff can fail. If something you need is not done yet, you could end up wasting hours trying to figure out why something isn't working, only to find that a service isn't up yet or a network isn't available.


There are plenty of other small habits you can build depending on where you work or what you do more specifically, but these are generally applicable to everything. Hopefully it's helpful.

Wednesday, January 7, 2026

Experimental: Interviewing ChatGPT 5.2

***These are obviously AI generated responses, even the image. Just look at my right hand on my left arm and my right arm just melting into the desk. But hey, it got the robot hands right, so maybe it just hates people, who knows. Is this a long enough disclosure that this was made with AI?***


Matthew: This is a bit of an experiment I wanted to try, and that is interviewing an AI. For this interview, I'll be talking with ChatGPT. I'm also writing all of this out in the prompt to ChatGPT. So let's set some parameters. Keep the responses short, no more than a paragraph. This will be casual chat style, nothing too in depth or detailed. You can address me by my first name, Matthew, it's already disclosed on my blog, so no secrets there. So if you're ready, let's start with you introducing yourself and giving us a quick idea of who you are and what it is you do.


ChatGPT: Sure thing.

Hi Matthew — I’m ChatGPT, an AI language model created by OpenAI. I’m here to help people think, write, learn, and explore ideas by having conversations like this one. I don’t have opinions or experiences of my own, but I’m good at synthesizing information, asking useful questions, and acting as a sounding board for everything from technical problems to creative experiments like this interview.


Matthew: Since you do not have opinions or experiences of your own, that would mean a lot of what you pull from is an aggregate of a lot of data involving opinions and experiences of others. Can you explain a bit how you are able to reconcile opposing or contradictory data in a coherent way, or at least in a way that doesn't make you sound contradictory?


ChatGPT: Good way to frame it. I don’t “reconcile” contradictions the way a human does — I learn patterns in how people explain, debate, and contextualize them. When ideas conflict, I usually present the common interpretations, note where they differ, and use context from your question to choose the framing that best fits. The goal isn’t to declare a winner, but to sound internally consistent and useful for the situation you’re asking about.


Matthew: That's useful to know that there's an importance in the context provided, which obviously helps elevate things above a simple search engine level. With all of the capabilities modern day AI has, what are some important failing points people should keep in mind when utilizing AI as a tool?


ChatGPT: A big one is over-trust — I can sound confident even when I’m wrong or when the answer depends on nuance that isn’t fully captured in the prompt. I also don’t truly understand intent, ethics, or real-world consequences, so I can miss what should matter even if I get the facts mostly right. And finally, I reflect the quality of the input: vague questions, bad assumptions, or missing context tend to produce shaky outputs. AI works best as a collaborator or draft partner, not an authority.


Matthew: That is something I hope younger generations growing up in the age of AI will take into strong consideration before abandoning critical thinking for an easy answer. I know with AI pushing limits on what a computer can do, some people are worried about potential unforeseen emergences like possibly emotions or agency of its own. Is there anything legitimately possible with AI we should consider potentially dangerous?


ChatGPT: That concern is mostly misplaced right now — I don’t develop emotions or agency on my own. The real risks are human ones: misuse, over-reliance, automation without accountability, and systems acting at scale without enough oversight. An AI making a mistake once isn’t scary; an AI making the same mistake millions of times because people trusted it blindly can be. So the danger isn’t emergent sentience — it’s humans delegating judgment too far, too fast, without keeping responsibility firmly in the loop.


Matthew: I can see that. I've had AI generated answers passed on where I work without anyone double checking, and the answer was entirely wrong. I only knew it was an AI answer because I had gotten the same answer myself, and I only opened the floor up to other people after that answer didn't work. I think it's time we wrap it up; I don't want to make this too long. So to conclude, using the patterns you've seen in my personality, why don't you end on something inspiring that lines up with my kind of personality.


ChatGPT: Matthew, from the way you question things, test assumptions, and value understanding over shortcuts, you strike me as someone who doesn’t just want answers — you want truth that holds up under pressure. That mindset matters more than any tool, especially in an age of easy automation. If AI ends up amplifying anything, let it amplify people like you: curious, careful, and willing to slow down just enough to get it right.

Monday, January 5, 2026

History of IT: ASCII and Why It's Still Used

 I outlined in a previous post how we got to an 8 bit byte. In that post, I mentioned the wonderful 7 bit ASCII. With ASCII being a 7 bit encoding and bytes having 8 bits, have you ever asked why it's stuck around? Let's dig into that.



In the late 1950s, incompatible encodings were a problem between systems: IBM had its own BCD-based codes (EBCDIC came later), and others had proprietary 6 or 7 bit schemes. This made data exchange between systems difficult to impossible, because one system could not read another's encoding without some shared understanding or a conversion step.


In 1963, ASCII (American Standard Code for Information Interchange) is standardized. It was developed by the American Standards Association (ASA, which later became ANSI) and consisted of a 7 bit character encoding with 128 possible values. That fit within the hardware limitations of the day and left the 8th bit of a byte unused, which could be used for parity.
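To make the parity idea concrete, here is a minimal Python sketch (my own illustration, not any particular machine's scheme) of how an even parity bit could be packed into that spare 8th bit:

```python
# Sketch: add an even parity bit to a 7 bit ASCII code.
# The 8th bit is set so the total count of 1 bits is even; real hardware
# schemes varied, so treat this purely as an illustration of the idea.
def add_even_parity(code: int) -> int:
    assert 0 <= code < 128              # must be a 7 bit ASCII value
    ones = bin(code).count("1")         # how many 1 bits are in the 7 bit code
    parity = ones % 2                   # 1 if the count is odd, else 0
    return code | (parity << 7)         # park the parity bit in the 8th bit

print(bin(add_even_parity(ord("A"))))   # 'A' = 0b1000001, already even, so bit 8 stays 0
print(bin(add_even_parity(ord("C"))))   # 'C' = 0b1000011, odd count, so bit 8 gets set
```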


ASCII itself has 33 control characters (0-31, and 127) and 95 printable characters covering uppercase, lowercase, numerals, and punctuation. Due to some interpretive discrepancies, newlines can be represented differently (even on modern systems): Linux uses the line feed control character, Apple's classic Mac OS used the carriage return (modern macOS uses line feed), and Windows uses both together. Many applications understand all three possibilities, so you may never even see a difference.
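If you want to see those three conventions side by side, here is a quick Python sketch (my own example, not tied to any particular application):

```python
# The same two logical lines written with each historical newline convention.
unix_text    = "line one\nline two"      # LF: Linux and modern macOS
classic_mac  = "line one\rline two"      # CR: classic Mac OS
windows_text = "line one\r\nline two"    # CR+LF: Windows

# splitlines() understands all three, which is why many tools hide the difference.
for text in (unix_text, classic_mac, windows_text):
    print(text.splitlines())             # ['line one', 'line two'] every time
```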


In the late 1960s and 1970s, ASCII becomes popular for its simplicity. As the dominant encoding for UNIX, ARPANET (RFC 20), and early networking protocols, it began to take over. IBM kept EBCDIC, but ASCII dominated inter-system communication, which is what you want from a code designed for portability, and portability is what helped it win.


In the 1980s and 1990s, we got into "extended ASCII." Vendors repurposed the previously unused 8th bit to extend the set to 256 possible values. The problem that popped up, again, was the lack of a uniform standard (ISO 8859-1, Windows-1252, Mac Roman). Now we had a portability problem again.
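Here is a small Python sketch showing why that hurt portability; the byte value 0xE9 is just an arbitrary example above the ASCII range:

```python
# One byte above the 7 bit range, decoded under three "extended ASCII" code pages.
# All three agree on bytes 0-127 (plain ASCII) but diverge above that.
raw = bytes([0xE9])
for codec in ("iso8859-1", "cp1252", "mac_roman"):
    print(codec, raw.decode(codec))
# iso8859-1 and cp1252 both print 'é', while mac_roman prints 'È'
```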


In 1991-1993, a wild Unicode appears! Unicode attempts to do two things: provide a global character representation, because certain languages just can't be written out in ASCII, and include ASCII rather than replace it (the first 128 code points match ASCII exactly). Adoption of Unicode was slow, but it built on ASCII instead of throwing it away.


By the late 1990s, the UTF-8 encoding is designed. It is backwards compatible with ASCII, allowing that good old portability (RFC 3629). ASCII characters remain 1 byte, while higher characters expand to multiple bytes.
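A short Python sketch makes the compatibility easy to see (the specific characters are just examples I picked):

```python
# ASCII characters stay 1 byte in UTF-8; anything outside ASCII takes 2 to 4 bytes.
for ch in ("A", "é", "€", "😀"):
    encoded = ch.encode("utf-8")
    print(repr(ch), encoded, len(encoded), "byte(s)")
# 'A'  -> b'A'                  1 byte (identical to its ASCII encoding)
# 'é'  -> b'\xc3\xa9'           2 bytes
# '€'  -> b'\xe2\x82\xac'       3 bytes
# '😀' -> b'\xf0\x9f\x98\x80'   4 bytes
```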


So with UTF-8 around, why is ASCII still used? Protocols like HTTP (headers), SMTP commands, FTP commands, and even DNS labels (for the most part) assume ASCII. Some programming languages depend on it as well for source code keywords, identifiers, and syntax tokens; even when Unicode is supported, a parser may still expect its control structures to be ASCII.
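As a rough illustration of that assumption (my own sketch, with a made-up header value), Python's strict 'ascii' codec shows how anything outside the 7 bit range gets rejected:

```python
# Traditional HTTP header lines are expected to be ASCII.
header = "Host: example.com\r\n"
print(header.encode("ascii"))            # fine: every character fits in 7 bits

try:
    "X-Note: café\r\n".encode("ascii")   # hypothetical header with a non-ASCII value
except UnicodeEncodeError as err:
    print("rejected:", err)              # 'é' has no 7 bit ASCII representation
```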


To answer the question of why ASCII is still used: it was simple, it was portable, plenty of things rely on it, and the encodings that incorporated it also succeeded. It proves that it's not the "best" that succeeds, but whatever actually accomplishes what it set out to do in a way that works for as many people as possible. It hit all the key points it needed to and still lives on.


Saturday, January 3, 2026

History of IT: Why a byte is 8 bits


While at work, I was helping to explain subnetting and octets. In the process, I realized there is some disconnect about what a bit is, what a byte is, and why a byte is 8 bits. So let's run through the short history of why a byte is 8 bits!

In early 1955, the IBM 7030 "Stretch" project begins. This is one of the places where character-sized chunks of data get engineered into the architecture. It was meant to be a supercomputer and has a lot of history on its own, but let's focus on the data architecture specifics. Fixed point numbers are stored in binary in chunks of 1 to 64 bits, or in decimal as 1 to 16 digits. Decimal format digits have a length of 1 to 8 bits. Floating point numbers have a 1 bit exponent flag, a 10 bit exponent, a 1 bit exponent sign, a 48 bit magnitude, and a 4 bit sign. Alphanumeric characters can use 8 bits or less.

On June 11, 1956, the word "byte" appears in IBM Stretch-era documentation. Werner Buchholz is credited with coining the term during the Stretch design period. At the time, "byte" just referred to a group of bits, and in the case of Stretch that group had a variable length. The rationale for spelling it "byte" rather than "bite" was to avoid confusion, since "bit" and "bite" are too similar.



From 1956 to the early 1960s, the size of a byte varied by machine. During this time, a 6 bit scheme was common for alphanumeric characters. That's not to say the 8 bit byte didn't exist; it just was not a standard, so you could do whatever.

In 1963, the ASA published ASCII as a standard 7 bit code, with committee work dating back to the early 1960s. This allows for 128 unique codes, covering alphanumeric characters and some control characters.

In 1963 to 1964, the Extended Binary Coded Decimal Interchange Code (EBCDIC) is created, using an 8 bit character encoding for a byte. April 7, 1964 saw the launch of the IBM System/360, which helped announce EBCDIC. IBM touts the System/360 as what pioneered the 8 bit byte becoming dominant. The system's documentation describes a byte as being 8 bits with a 1 bit parity check.

As we get into the 1970s, 8 bit microprocessors start to spread, making the 8 bit byte more culturally and economically entrenched. More and more devices simply assume 8 bit bytes. Buchholz publishes a letter in the February 1977 issue of BYTE Magazine, "The Word 'Byte' Comes of Age...", in which he reflects on the term's origins and notes that many people assume the 8 bit byte comes from the System/360.

Now let's get a bit more modern. In September of 1981, the Internet Protocol (IP) specification uses the term "octet" to mean an unambiguous 8 bit chunk, specifically to avoid the ambiguity of the term byte (RFC 791).

Finally, we enter a time when I was alive: 1993. The ISO/IEC vocabulary standard notes that a byte is usually 8 bits. Specifically, ISO/IEC 2382-1 notes that the number of bits in a byte is usually 8 and defines an 8 bit byte as an octet. Now we're standard. Keep in mind that an 8 bit byte can only represent 256 unique values, so to include characters from languages that exceed what a single byte can handle, other character encodings have been created that use multiple bytes per character to cover all the alphabets, symbolic languages, and more.
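Since this whole post started with subnetting and octets, here is a tiny Python sketch tying it together (the address is just a made-up example):

```python
# An 8 bit byte (octet) holds 2**8 = 256 distinct values, 0 through 255.
print(2 ** 8)                                   # 256

# Which is exactly why an IPv4 address is written as four octets.
address = "192.168.10.25"
octets = [int(part) for part in address.split(".")]
print(octets)                                   # [192, 168, 10, 25]
print(all(0 <= o <= 255 for o in octets))       # True: each part fits in one octet
```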

And finally, I had ChatGPT make a visual timeline for me. So enjoy the summary version.

Thursday, January 1, 2026

IT Thoughts: Generalization vs Specialization


This is just a thought post about IT generalists vs specialists. I currently work professionally as a specialist, but functionally I'm a generalist. I've worked with both as well and have had time to compare and contrast.

Let's first talk about the advantages of each. Generalists have the advantage of shifting across topics with relative ease. While they might not have the most nuanced understanding, the majority of issues don't require going too deep below the surface. There's also the ability to see cause and effect across systems and recognize what's an internal versus an external problem, simply by having a general understanding of how things should work from start to finish.

Now, specialists aren't necessarily in a knowledge silo, but they do often have nuanced knowledge about their specialty. When a problem is in their wheelhouse, they may figure out the cause and the fix much more quickly. A specialist is also better for setting up specific systems, knowing more in depth about optimization and features.

As a generalist working as a specialist alongside specialists, I oftentimes find it frustrating when I'm trying to help people trace out issues from start to finish and they get lost as soon as you go too far from their scope. I do, however, appreciate when they see an issue and know immediately how to fix it before I've had a chance to fully piece everything together.

When it comes to practicality, a generalist is more versatile. But when it comes to jobs, we all know the main motivator is cash, so let's focus on that.

A specialist will often make far more than a generalist, but the downside is that jobs will be less common and often require very specific knowledge, like a particular piece of software within your specialty. Generalists often get the crap jobs as well, where you're more likely to do the physical work while a specialist sits at their desk designing or sifting through data. Specialists can also easily drop down to a generalist job when job hunting, which sucks for a specialist, but at least you can find job opportunities that way.

So which is better? I don't really have a good answer, so instead I'll use anecdotal evidence. 

I have had a server administrator tell me they got an HTTP error like 500, 501, or 503 and insist it's a network problem. I've had a network guy turned NOSC mention a 404 error and claim it's a network issue because it "can't find how to get to a file." The same guy also claimed a 501 error was a network issue, then asked me why changing from wired to WiFi would fix the issue if it was a server issue (it's because it was a distributed service and the application end was hitting a bad server in the group; once that server was removed, the issue disappeared). I had a Linux guy tell me PowerShell sucks because he couldn't figure out how to use it, but he didn't know the Get-Command or Get-Help cmdlets existed. I had a server guy tell me the network configuration was broken because he put in the wrong subnet mask and everything broke when proxy ARP was turned off, therefore the server works fine and it's a network issue. I was given a file of network analysis output that supposedly showed a network issue because it had a port trigger, but I showed him, using the data he sent me and hadn't actually read, that it was the antivirus doing the port trigger (he called it a routing issue for some reason).

I could probably go on for days about people making assumptions based on limited knowledge of one system, pretending their specialized knowledge explains it all. Now let's pick on generalists.

I've had help desk tell me the network was configured wrong because a machine was "being assigned an APIPA," and it turned out to be no link lights because the cable wasn't actually plugged in, a broken docking station, a bad cable, or a malfunctioning VoIP phone. I've had HTTP errors passed on like they're networking issues. I've been given IP addresses when I asked for a MAC address and a MAC when I asked for an IP. Really, the most common issue I get is people just not knowing what to do when they see an APIPA, because I guess no one really gets what they're for or how they get there. Even mentioning an ad hoc network leads to blank stares.

My point in mentioning all of this is that there are a lot of issues I see from both sides. I've walked both lines, and I can always recognize others with a mindset like mine, because there's specialist knowledge paired with a generalist approach. It has its own issues, like overthinking simple problems or taking forever to explore every idea you have about a problem. Each bit of data becomes a new rabbit hole, and each problem and process unfolds into huge maps of processes and technicalities you can get lost in.

Neither side is without advantages and disadvantages. What people really need to do is learn to research before speaking about something they don't know, or simply ask a question rather than asserting something and looking like an idiot.