Wednesday, October 22, 2025

My Thoughts On AI Use Cases


Note: My experience is very limited, but I couldn’t help sharing my experience with AI and tossing my $0.02 into the mix.


I, like others, ignored the AI rage when it first came out.  Sure, I played around a bit, but I had just retired and just wasn't ready for more hype, drama, tech bubbles, and more ...  In addition, I saw the early “AI dreaming” and the lack of references in the answers as being like doing research on Facebook while sitting on the potty.


This year I got serious after watching a Dave Plummer (Dave's Garage) YouTube video where he went into research mode on AI.  Dave, a retired Microsoft developer, did his homework, and I put a lot of faith in Dave's research and opinions / findings. #SmartDude.  I then tried the LLM AI solutions he recommended against a couple of use cases that I had on the burner.  For the most part, my results agreed with his.


I had three specific use cases.  

  1. I wanted to develop a Python app to modify the logs that come out of my Ham Radio logging app on my phone and prep the data to be loaded into my log-clearinghouse on my PC (ACLOG) in a very geeky way.  I had been modifying the logs from my previous logging app prior to import, but the new logging app allowed me to tweak the output using the built-in UI, so I had less to do, unfortunately.   It was simple: I just wanted to add one more field so that I could import the data into my PC-based app the way that I wanted. 

  2. My second use case was to play the part of a reviewer of my BLOG articles (I have multiple BLOGs).  I enjoy writing and therefore blogging, but I really need an editor to read my stuff prior to publishing and provide feedback.  At a minimum, my wife or others can catch goofy spelling and other mistakes, as well as point out areas that make no sense.  Getting people to review my tortured writing is a pain, and I want to publish sooner rather than later.  The only way I can catch these things myself is to put the article on the shelf and wait a few days before reviewing it again, but I never catch all the mistakes.

  3. My third use case was like everyone else's: using AI for general research.  The one thing I really wanted, though, was references in the answer to the articles used.  This is sometimes critical so that I can dig in to better understand the author, their experience, their testing methodology, etc.


For all three use cases, I tried ChatGPT, Gemini, Claude, Grok, and Perplexity.  I ended up following Dave's recommendations and settled on Claude for code development, Gemini as my writing reviewer, and Perplexity for general research.



AI As My Reviewer

For writing, I was blown away by some of the great suggestions to improve content that I'm currently publishing once per week, an eleven-tip series on hiking safety.  It not only pointed out where I should clarify some points, but also where I should mention other points that I hadn’t thought of.  Not perfect, but good enough for my current use. I did laugh when it would stroke my already Texas-sized ego too.  (WARNING: Gemini has looked at this article).  



AI As A Draft Code Writer

For my Python code: I've always enjoyed designing and writing code, but I was a Python padawan at best and hadn't written any code in years.  I looked at the code that came out of each of the AI tools I tried and was blown away by the fact that they implemented my instructions correctly the first time, every time.  The use case was simple and short, but a great learning experience.  I chose Claude because it provided the best code, with better exception handling, and it did a good job of improving the code when I asked it to add more exception handling (called prompting in the industry, I think).


The code that I ended up with got me 90% of the way to my goal, but I hadn't told the AI robot what format the data file was in, or exactly what I wanted.   None of the apps did any error checking of user input either.  I’m guessing that if I had been more pedantic with the design instructions, they would have done a better job, and I could have used the “prompting” method to improve the code with AI.  Besides, I still wanted to write a bit of code and play while I was at it.
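For illustration, here's the sort of minimal user-input checking that was missing.  This is my own sketch, not code from any of the AI tools, and the fixlog.py script name in the usage line is made up:

```python
import os
import sys

def get_input_path(argv):
    """Validate the command-line argument before touching the file."""
    if len(argv) < 2:
        # "fixlog.py" is a hypothetical script name for this example
        sys.exit("usage: fixlog.py <logfile>")
    path = argv[1]
    if not os.path.isfile(path):
        sys.exit(f"error: {path!r} is not a file")
    return path
```

A few lines like this up front beat a raw traceback when someone feeds the script a typo'd filename.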


Hilarity ensued when I worked for 30 minutes (minimum) on a simple "if" statement that I had added, and it wouldn't run.  I gave up in frustration, uploaded the code to Gemini (I think), and asked why it wouldn’t run.  The AI app reminded me that my "If" statement can't be capitalized.  A simple mistake that my sub-optimal eyeballs and wetware parser couldn't see.  
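For the curious: Python keywords are case-sensitive, so a capitalized "If" won't even compile.  A quick way to see the exact failure:

```python
# Python keywords are case-sensitive: "if" works, "If" is a syntax error.
good = "if True:\n    x = 1"
bad = "If True:\n    x = 1"

compile(good, "<good>", "exec")  # compiles fine

try:
    compile(bad, "<bad>", "exec")
except SyntaxError as err:
    print("SyntaxError:", err.msg)
```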


I added some features, which forced me to learn some more Python, and I eventually got the code working just the way I wanted, then froze development.  AI was useful for learning as I issued example instructions and used "prompting" to improve or modify the code, an approach sometimes referred to as vibe coding.


Here's the kicker: Later, after doing some more research, I decided to ask AI to develop the code again, but this time I told it the format of the file (an obscure format used in Ham Radio called ADIF, similar in spirit to JSON).  I may have improved my instructions a tad as well.  It blew me away.  Not only did it know what the format was, it autonomously did the following:

  • Wrote a function to parse the file and determine which ADIF fields were present

  • For each record, figured out whether it should append the new field or modify an existing one

  • Decided on the order in which the fields should be written out

  • Wrote another function to write out the final file

  • Included lots of exception handling

What's most interesting is that it actually improved my shitty design of “just add a field onto the end of each record”.  It decided, correctly, that it should check whether the field already existed in the ADIF file, for example.  I knew the field would never be there, but it was a good call.  It also broke the key phases of the process down into functions, making the code easier to maintain.  <Jaw hitting the floor>.  I didn't use the code because I already had my solution and had done the work to integrate custom exception handling and some command-line stuff, but it was sooooo cool.  
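For readers who've never seen ADIF, here's a rough sketch of that parse/modify/write approach.  This is my own illustration, not the AI-generated code; it skips headers and exception handling, and the field names in the test are just examples:

```python
import re

# ADIF stores each field as <NAME:length>value; records end with <eor>.
FIELD_RE = re.compile(r"<(\w+):(\d+)>")

def parse_record(text):
    """Parse one ADIF record into a dict of {FIELD: value}."""
    fields = {}
    for m in FIELD_RE.finditer(text):
        name, length = m.group(1).upper(), int(m.group(2))
        fields[name] = text[m.end():m.end() + length]
    return fields

def write_record(fields):
    """Serialize a field dict back into one ADIF record."""
    parts = [f"<{name}:{len(value)}>{value}" for name, value in fields.items()]
    return " ".join(parts) + " <eor>"

def add_field(adif_body, name, value):
    """Add (or overwrite) a field on every record in an ADIF log body."""
    out = []
    for rec in adif_body.split("<eor>"):
        if not rec.strip():
            continue
        fields = parse_record(rec)
        fields[name.upper()] = value  # modify if present, append if missing
        out.append(write_record(fields))
    return "\n".join(out)
```

Checking whether the field already exists falls out naturally from using a dict per record, which is exactly the kind of "good call" the AI made on its own.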


I downloaded the PyCharm Python IDE to edit my code.  It came with a trial of an AI service that supports your development as you type.  That experience was almost scary because it seemed to be reading my mind, accurately completing entire lines of code for me.  If I did a lot more development, maybe I would purchase a subscription.  



Using AI as my Research Assistant

My last use case of AI is general questions and research.  I use Perplexity for this because I like the way it provides solid references with its answers, and they have an iPhone app. #Awesome.  


At this time, I’m guessing that using AI for research is probably the most common use and the most mature.  But as more content is written by AI, things could get a little muddy, and the feedback loop may melt down the googolplex of processors churning through content.  AI doesn’t really know what is true and what isn’t.  It’s easily fooled if it dips into a deep pool of inaccurate information because it doesn’t evaluate the author in any way, nor can it draw conclusions based on other, more valid sources of information.  Again, it’s like doing research on Facebook while sitting on the crapper.  Showing the receipts, links to its references, gives the user the opportunity to try to get it right.


In Summary

LLM Use Case Breakdown

Use Case: Code Development
Preferred LLM: Claude
Key Takeaways & Observations: Produced excellent, working Python with the best exception handling on the first draft.  Prompting (including giving it the ADIF file format) led to an incredibly advanced and efficient solution that improved my design.  AI-assisted coding, especially with tools like PyCharm's AI feature, is "almost scary" in its effectiveness.

Use Case: Writing Review/Editing
Preferred LLM: Gemini
Key Takeaways & Observations: Blown away by the level of suggestions, not just for proofreading but for content clarification and adding new, relevant points (my ego is still recovering).  It acts as a much-needed editor that speeds up my publishing process.

Use Case: General Research
Preferred LLM: Perplexity
Key Takeaways & Observations: Selected specifically for its reliability at providing solid references/citations with its answers, which is critical for evaluating the source material.  Use AI carefully for research.


Using AI for code development is here to stay.  Developer friends tell me AI has changed the code development industry, and my son, a computer scientist, says he uses it constantly to write code.  AI is being integrated into entire development tool sets and team practices.  Given that code is essentially discrete logic, this makes sense.  


To survive in this new environment, get good at prompting and other methods of using this next generation of tools to be productive.


What AI doesn’t do, and I doubt will ever replace, are solution architects.  Architects take business requirements, along with their understanding of the integrations and the overall business and customer processes, to create solutions.  Sure, AI may take a crack at it, but from what I’m hearing, this is turning out to be a disaster for some companies that think they can hire a bunch of vibe coders.  These companies are now looking for higher-skilled developers and designers to fix the spaghetti solutions that won’t scale, integrate, or change as needs change.  

Some other thoughts:

  • Prompting (follow-on questions or directives to the AI bot) was amazing

  • AI worked well for my small solution, but it was minuscule in the scheme of things when you consider writing an entire application. Certainly more research is required.

  • I see indications that AI falls down when poor designers / developers use it to write large apps.  Humans still best understand the requirements, good design, building in supportability, etc.  The next wave of work will be specialists coming in to unwind apps developed with AI that have turned into a nightmare to support or change.

  • It doesn't write my articles but saves me from having to talk others into reading my drafts.

  • Perplexity is an awesome research assistant but I still want to see the sources, which it makes easy.


References and More


Research Notes From Dave's VLOG on AI

Core notes and input from Dave Plummer (Dave's Garage).  In this section, I took notes from Dave Plummer's video presentation, then tried each application (see above).

CHATGPT

  • v4.1

  • Wrote the C++, but was more broad and conversational; a good co-pilot.

  • More concise on questions.

  • Good at random storytelling.

  • Can work with ~96k words of context window (input) text.

CLAUDE

  • v3.7 or 4

  • Claude crushed a C++ problem, complete with a makefile.  Ranked the highest in Dave's test for code generation.

  • Reasoning, layered responses.  Goes deeper.

  • Can work with ~150k words of context window (input) text.

GEMINI

  • 2.5 Pro

  • Required more prompting, and more precise prompting.

  • Plays it safe on news.

  • Can work with ~750k words of context window (input) text.

  • Able to summarize large files without losing context.

  • Heavy lifter for summarizing.

GROK (owned by Musk)

  • v3

  • Generated code, defaulted to Python, explained the code; fast and fun.

  • Pulls in trending info, etc.  Mixes in commentary; nails public insight. 

  • Best at breaking news (what happened and how people are reacting to it).

  • Can work with ~750k words of context window (input) text.  The effective limit might be much lower.

  • Heavy lifter for summarizing (but watch out if under heavy loads???)

  • [GROK was last in my list]

Dave’s Summary

  • Not a loser in the list

  • Claude for writing code

  • Storytelling from ChatGPT

  • GROK for breaking news

  • Gemini for documents and massive input: coding, research, news

[Dave did not review Perplexity (that I remember)]


- Chris Claborne

N1CLC