While choosing and installing a new operating system, configuring hardware, and setting up software, I relied on basic troubleshooting to get things working. I also experimented with several large language models (AI), with mixed results. In this article, I share those experiences to help you use AI effectively and avoid the common pitfalls I encountered.
Treat AI as a knowledgeable intern who needs constant supervision. It can explain concepts, interpret logs, and suggest options, but cannot troubleshoot independently. AI is prone to errors and outdated information, so guide it carefully, verify its suggestions, and maintain your own troubleshooting discipline. Below, we cover AI’s strengths, limitations, and best practices for getting the most from its support.
Strengths and limitations of AI

⚠️ Warning: before changing configuration files or running scripts suggested by an LLM, ask for the original information source, make sure you understand what each step does, and have a rollback plan in place before you begin.
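For configuration edits, the rollback plan can be as simple as a timestamped copy of the file. A minimal sketch, demonstrated on a temporary stand-in file (the real target would be whatever file you are about to change; `/etc/tlp.conf` below is only an illustration):

```shell
# Back up a config file before editing it, keeping the copy as the rollback
# path. Uses a temporary stand-in so the sketch is safe to run as-is; in
# practice $conf would be something like /etc/tlp.conf.
conf=$(mktemp)
echo 'SOME_SETTING="original value"' > "$conf"

backup="$conf.bak.$(date +%Y%m%d-%H%M%S)"
cp -p "$conf" "$backup"              # -p preserves permissions and timestamps

# ...apply the suggested edit to "$conf" here...

# Rollback: restore the saved copy if the change misbehaves.
cp -p "$backup" "$conf"
cmp -s "$conf" "$backup" && echo "rollback verified"
```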
AI performs differently depending on the type of question you ask.
💡 AI has a tendency to confirm whatever you ask of it. Leverage this behaviour by asking AI to search for errors, inconsistencies, or potential issues. This usually produces more reliable results than simply asking for confirmation that there are no errors. What also works well is asking AI to refer you to a reputable source.
When my Bluetooth stopped responding — the enable button in Linux simply did nothing — I used AI as a structured data-gathering tool rather than asking it to diagnose immediately.
I started by giving the AI my system configuration, the symptom, what I had already tried (a reboot had not helped), and recent changes. I could not remember whether any updates had preceded the problem — so I made sure to share this explicitly. I then asked for instructions on what data to gather before attempting any diagnosis. The AI complied, though it occasionally forgot a detail or suggested an unnecessary filtering step, which I corrected myself. I continued collecting information and asking what more data I should gather, until the AI indicated that sufficient data had been collected to begin analysing potential causes.
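The data-gathering commands in my case were along these lines — a sketch rather than an exact transcript, and availability varies by distribution. I pasted each section's output back to the AI verbatim:

```shell
# Collect Bluetooth-related state into one report file. Individual commands
# may fail on other setups; their error messages are captured too, which is
# itself useful data for the AI.
{
  echo "== rfkill state =="
  rfkill list bluetooth
  echo "== kernel messages =="
  dmesg | grep -i bluetooth | tail -n 20
  echo "== bluetooth service =="
  systemctl status bluetooth --no-pager
  echo "== recent journal entries =="
  journalctl -u bluetooth -b --no-pager | tail -n 20
} > bt-report.txt 2>&1 || true

echo "report written to bt-report.txt"
```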
💡 Had the AI jumped to potential causes without prior notice, I would have challenged it by asking what information we could still be missing — keeping it in the data collection phase a little longer, and avoiding the confirmation bias that a question like "do we have sufficient information?" can cause.
To start analysing the cause, I asked the AI to eliminate potential causes rather than identify a single likely one. Two remained: driver or hardware. I then asked what additional data would help eliminate further causes, gathered it, and fed the results back. The AI narrowed the picture further, suggesting the hardware was either defective or stuck in a disabled state.
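The "stuck in a disabled state" hypothesis is something `rfkill` can speak to directly: it distinguishes a software block (set by a tool, reversible) from a hardware block (a physical switch or firmware). A sketch of the check and how to read its output:

```shell
# Classify the radio's block state. The case patterns match rfkill's
# human-readable output; an empty result usually means rfkill (or the
# Bluetooth adapter) is unavailable. Hard block is checked first, since
# both blocks can be active at once.
state=$(rfkill list bluetooth 2>/dev/null)
case "$state" in
  *"Hard blocked: yes"*) echo "hard block: physical switch or firmware" ;;
  *"Soft blocked: yes"*) echo "soft block: some software disabled the radio" ;;
  *"blocked"*)           echo "no block reported: look elsewhere" ;;
  *)                     echo "rfkill or adapter unavailable" ;;
esac
```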
That second possibility reminded me that I had configured TLP — a power management tool — to disable Bluetooth when unused. I shared this with the AI. It proposed temporarily disabling TLP and rebooting. Before acting, I asked whether the reboot would reactivate TLP — deliberately framing the question to invite a positive response, because I needed to verify the opposite. The AI confirmed it would not. Satisfied the action was low-risk and easily reversed, I proceeded. After the reboot, Bluetooth was restored.
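For context, the TLP mechanics involved look roughly like this — the option names are real TLP settings, but the values shown are illustrative assumptions, not my actual configuration:

```shell
# Fragment of /etc/tlp.conf (illustrative values):
DEVICES_TO_DISABLE_ON_STARTUP="bluetooth"   # the kind of setting that can
                                            # leave a radio switched off
#TLP_ENABLE=0                               # uncommenting disables TLP entirely

# Reversible ways to take TLP out of the equation while testing:
#   sudo systemctl stop tlp.service      # until the next reboot
#   sudo systemctl disable tlp.service   # persists across reboots
```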
With the issue fixed, I wanted to reproduce the problem to validate that I had tackled the root cause. I re-enabled TLP and rebooted, but the problem did not return. To test whether TLP could still be the cause, I invited the opposite response again: I told the AI the issue was resolved with TLP running and asked it to confirm that TLP was now eliminated as a cause. The answer was a reasoned no.
I then copied my full TLP configuration into the AI and asked it to identify any problems — including those seemingly unrelated to Bluetooth. This framing was deliberate, to avoid missing issues due to the AI's positive confirmation bias. The AI identified two issues immediately and cited its sources. I verified both through the referenced documentation and applied the fixes.
As a final step — later than ideal — I asked the AI to search for reports from other Lenovo X1 Carbon users with similar symptoms. Results pointed to two known causes: a Linux kernel regression and a TLP configuration issue. The fixes I had already applied addressed both. I was unable to reproduce the problem, so strictly speaking I did not definitively establish a root cause. But given the low impact should the failure recur and the corroborating information found online, I was sufficiently satisfied that the issue was resolved.
💡 In hindsight, I recalled that I had asked the AI to check my TLP configuration before — but I had forgotten to mention the TLP version at the time. So: make sure to include version numbers in the context you provide.
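A quick way to front-load those version numbers into the conversation; the `command -v` guards and fallback messages are only there to keep the sketch runnable on machines without the tools:

```shell
# Collect version numbers to include in the AI's context up front.
echo "kernel: $(uname -r)"
if command -v tlp-stat >/dev/null 2>&1; then
  tlp-stat -s | head -n 5          # system report; includes the TLP version
else
  echo "tlp: not installed"
fi
if command -v bluetoothctl >/dev/null 2>&1; then
  echo "bluez: $(bluetoothctl --version)"
else
  echo "bluetoothctl: not installed"
fi
```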
Using AI for support has been a learning process. Open-ended or leading questions often led it astray, sometimes producing plainly wrong answers. In one instance I overlooked a flawed command the LLM had suggested, and undoing its effects cost me a lot of time.
The key was understanding AI's limitations: confirmation bias, overconfidence, and limited multi-step reasoning. Once I adjusted my approach, AI became genuinely helpful — finding suitable hardware and software, locating documentation, exploring possible causes, and checking whether an issue was already known.
By applying the ABCs of troubleshooting myself and using AI strictly as an assistant to gather information, check for errors, generate ideas, and find documented solutions, I reduced wasted effort while staying in control. Whether working on operating systems, applications, or online services, keeping the human in charge proved essential.