AI as tech support

While choosing and installing a new operating system, configuring hardware, and setting up software, I relied on basic troubleshooting to get things working. I also experimented with several large language models (LLMs), with mixed results. In this article, I share those experiences to help you use AI effectively and avoid the common pitfalls I encountered.

Knowledgeable Intern

Treat AI as a knowledgeable intern who needs constant supervision. It can explain concepts, interpret logs, and suggest options, but cannot troubleshoot independently. AI is prone to errors and outdated information, so guide it carefully, verify its suggestions, and maintain your own troubleshooting discipline. Below, we cover AI’s strengths, limitations, and best practices for getting the most from its support.

Strengths of AI

  • Suggesting alternatives in well-known software categories
  • Explaining logs, errors, or configuration files
  • Clarifying concepts such as protocols or system components
  • Translating technical documentation into plain language
  • Comparing tools or configurations objectively
  • Providing pre-flight checklists and recognising known error patterns

Limitations of AI

  • Tends to confirm whatever you present to it (confirmation bias)
  • Training data may be outdated
  • Reasons poorly beyond one or two steps
  • Jumps straight to fixes without diagnostic discipline
  • Biased toward the command line; may overlook solutions in the graphical user interface
  • May suggest outdated or unmaintained software
  • Makes mistakes in commands or instructions, and may not spot its own errors


Good practices

  1. State facts clearly – Describe exactly what you observe, not what you assume is happening. Provide a solid problem description and relevant context. For guidance, read the section "How to get effective support" in this article: Troubleshooting and finding support
  2. Follow the troubleshooting ABCs – Keep your AI assistant on track. Only move to the next step when you are confident you are ready. Refer to the troubleshooting ABCs here: Troubleshooting and finding support
  3. Prevent confirmation bias – Instead of asking “Is this configuration correct?”, ask “Identify the errors in this configuration.”
  4. Insist on version-specific sources – Ask AI to substantiate its claim by referencing its information sources where possible.
  5. Check commands before execution – Especially those requiring elevated privileges.
  6. Take notes and maintain restore points – Track actions so you can backtrack if necessary.
  7. Ask GUI vs CLI – If using Linux, explicitly ask for KDE-native (Qt) solutions before defaulting to terminal commands.

⚠️ Warning: before changing configuration files or running scripts suggested by an LLM, ask for the original information source, make sure you understand what each step does, and have a rollback plan in place before you begin.
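A rollback plan need not be elaborate. A minimal sketch, using a stand-in file (on a real system this would be the actual configuration file, and editing it may require elevated privileges):

```shell
# Minimal rollback plan: keep a timestamped copy of a config file before
# applying an AI-suggested change. The path and content are stand-ins.
cfg="./example.conf"                          # illustrative config file
printf 'SETTING=original\n' > "$cfg"          # stand-in for existing content

backup="${cfg}.bak.$(date +%Y%m%d-%H%M%S)"    # timestamped backup name
cp -p "$cfg" "$backup"                        # -p preserves mode and timestamps
echo "backup written: $backup"

# ...apply the suggested change here...

# Roll back if the change misbehaves:
cp -p "$backup" "$cfg"
```

The timestamp in the backup name lets you keep several generations of the same file without overwriting earlier restore points.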

Question types

AI performs differently depending on the type of question you ask:

  • Verification – The most dependable category. Syntax and configuration questions are well represented in training data. For example, “Where does this configuration deviate from good practice?” is generally reliable, though reliability decreases with niche software or recent version changes. When reviewing configuration files, ask AI to identify problems or inconsistencies rather than to confirm everything is correct — this works against its tendency toward confirmation bias.
  • Diagnostic – Reliable for common, well-documented symptoms. Questions like "What are possible causes of [symptom]?" are safe because they request a list, not a conclusion. Ask AI to eliminate potential causes one by one rather than converge on a single answer too quickly. If more data is needed, ask AI what information would help isolate the cause before moving to diagnosis. Version-specific or obscure issues will require additional verification, as AI may suggest plausible-sounding but incorrect causes.
  • Scoping – Useful for mainstream software. Questions about log locations or required information are generally accurate but may be unreliable for less common tools or distributions. When a diagnosis points to a known software component in your setup, provide that context explicitly — AI cannot know your configuration unless you share it.
  • Explanation – This category requires the most verification. Questions like "Why does X cause Y?" or "What is the relationship between A and B?" invite causal reasoning, where AI is weakest. Treat answers as hypothesis-generators, not authoritative conclusions, and verify them independently. Ask AI to search for documented reports of similar issues from other users — this can confirm whether a problem is known and whether a proposed fix is validated.

💡 AI has a tendency to confirm whatever you ask of it. Leverage this behaviour by asking it to search for errors, inconsistencies, or potential issues. This usually produces more reliable results than asking for confirmation that there are no errors. Asking AI to refer you to a reputable source also works well.

Case: Bluetooth failure

When my Bluetooth stopped responding — the enable button in Linux simply did nothing — I used AI as a structured data-gathering tool rather than asking it to diagnose immediately.

I started by giving the AI my system configuration, the symptom, what I had already tried (a reboot had not helped), and recent changes. I could not remember whether any updates had preceded the problem — so I made sure to share this explicitly. I then asked for instructions on what data to gather before attempting any diagnosis. The AI complied, though it occasionally forgot a detail or suggested an unnecessary filtering step, which I corrected myself. I continued collecting information and asking what more data I should gather, until the AI indicated that sufficient data had been collected to begin analysing potential causes.
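The data gathering itself can be scripted. A sketch of the kind of commands involved on a systemd-based distribution (illustrative, not the exact set we used; each command is guarded so the script keeps going even when a tool is missing or needs privileges):

```shell
# Collect Bluetooth diagnostics into one file before asking for a diagnosis.
# The guards (|| true) let the script continue if a tool is unavailable.
out="bt-diagnostics.txt"
{
  echo "== rfkill ==";  rfkill list bluetooth                 || true  # soft/hard block state
  echo "== usb ==";     lsusb                                 || true  # is the adapter enumerated?
  echo "== service =="; systemctl status bluetooth            || true  # is bluetoothd running?
  echo "== journal =="; journalctl -b -u bluetooth --no-pager || true  # service log, this boot
  echo "== kernel ==";  dmesg 2>/dev/null | grep -i bluetooth || true  # kernel messages
} > "$out" 2>&1
echo "collected: $out"
```

Collecting everything into one file makes it easy to paste the results into the AI conversation in a single step.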

💡 Had the AI jumped to potential causes without prior notice, I would have challenged it by asking what information we could still be missing — keeping it in the data collection phase a little longer, and avoiding the confirmation bias that a question like "do we have sufficient information?" can cause.

To start analysing the cause, I asked the AI to eliminate potential causes rather than identify a single likely one. Two remained: driver or hardware. I then asked what additional data would help eliminate further causes, gathered it, and fed the results back. The AI narrowed the picture further, suggesting the hardware was either defective or stuck in a disabled state.

That second possibility reminded me that I had configured TLP — a power management tool — to disable Bluetooth when unused. I shared this with the AI. It proposed temporarily disabling TLP and rebooting. Before acting, I asked whether the reboot would reactivate TLP — deliberately framing the question to invite a positive response, because I needed to verify the opposite. The AI confirmed it would not. Satisfied the action was low-risk and easily reversed, I proceeded. After the reboot, Bluetooth was restored.

With the issue fixed, I wanted to reproduce the problem to validate that I had tackled the root cause. I re-enabled TLP and rebooted, but the problem did not return. To test whether TLP could still be the cause, I invited the opposite response again: I told the AI the issue was resolved with TLP running and asked it to confirm that TLP was now eliminated as a cause. The answer was a reasoned no.

I then copied my full TLP configuration into the AI and asked it to identify any problems — including those seemingly unrelated to Bluetooth. This framing was deliberate, to avoid missing issues due to the AI's positive confirmation bias. The AI identified two issues immediately and cited its sources. I verified both through the referenced documentation and applied the fixes.
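For illustration, the kind of setting involved looks like this. This is a hypothetical fragment in tlp.conf syntax; DEVICES_TO_DISABLE_ON_STARTUP and DEVICES_TO_DISABLE_ON_BAT_NOT_IN_USE are real TLP keys, but the values shown are illustrative, not my actual configuration:

```shell
# Hypothetical fragment of /etc/tlp.conf (shell-style KEY="value" syntax).
# Settings like these can leave Bluetooth switched off after a boot or
# while idle on battery power.
DEVICES_TO_DISABLE_ON_STARTUP="bluetooth"          # radio switched off at boot
DEVICES_TO_DISABLE_ON_BAT_NOT_IN_USE="bluetooth"   # switched off on battery when idle
```

Because the file uses shell variable syntax, a whole configuration can be pasted into an AI conversation verbatim, which is exactly what I did here.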

As a final step — later than ideal — I asked the AI to search for reports from other Lenovo X1 Carbon users with similar symptoms. Results pointed to two known causes: a Linux kernel regression and a TLP configuration issue. The fixes I had already applied addressed both. I was unable to reproduce the problem, so strictly speaking I did not definitively establish a root cause. But given the low impact should the failure recur and the corroborating information found online, I was sufficiently satisfied that the issue was resolved.

💡 In hindsight, I recalled that I had asked the AI to check my TLP configuration before — but I had forgotten to mention the TLP version at the time. So: make sure to include version numbers in the context you provide.

Closing thoughts

Using AI for support has been a learning process. Open-ended or leading questions often led it astray, sometimes producing plainly wrong answers. In one instance I overlooked a flawed command from the LLM, which cost me considerable time to correct.

The key was understanding AI's limitations: confirmation bias, overconfidence, and limited multi-step reasoning. Once I adjusted my approach, AI became genuinely helpful—finding suitable hardware and software, locating documentation, exploring possible causes, and checking if an issue was already known.

By applying the ABCs of troubleshooting myself and using AI strictly as an assistant to gather information, check for errors, generate ideas, and find documented solutions, I reduced wasted effort while staying in control. Whether working on operating systems, applications, or online services, keeping the human in charge proved essential.

