I've been experimenting with a different workflow lately because I kept catching myself doing the same dumb loop: ask ChatGPT, feel convinced for 10 minutes, then second-guess everything anyway.
So instead of one model giving me a "final answer", I tried a couple of setups that force a research → debate → consensus thing. Like, agents arguing from different angles, poking holes, then trying to agree on what's actually supported.
I ran it on two things:
1.) a stock idea where I wanted filings + market data pulled in
2.) a "should we build this?" product decision
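To make the debate/consensus shape concrete, here's a toy sketch. This is purely hypothetical and not how any specific tool works under the hood: the "agents" are just canned claim lists, where a real setup would have LLM calls on each side. The idea is just that a claim only survives if it's either unchallenged or backed by a citation.

```python
# Toy sketch of a debate -> consensus step (hypothetical, illustrative only).
# bull_claims: list of (claim, evidence) pairs; evidence is None if the claim
# is asserted without a citation.
# bear_objections: list of (targeted_claim, counterpoint) pairs.

def debate(bull_claims, bear_objections):
    """Keep only bull claims that survive the bear side's objections."""
    surviving = []
    for claim, evidence in bull_claims:
        attacked = any(target == claim for target, _ in bear_objections)
        # A claim survives if nobody attacked it, or it carries cited evidence.
        if not attacked or evidence:
            surviving.append(claim)
    return surviving

bull = [
    ("revenue is accelerating", "10-K shows 3 straight quarters of growth"),
    ("moat is durable", None),  # asserted, no citation
]
bear = [
    ("moat is durable", "two funded competitors entered this year"),
]

print(debate(bull, bear))  # only the unattacked, cited claim survives
```

Obviously the real value is in how the objections get generated, but even this dumb filter captures why the output ends up being "here's what you're betting on" instead of a single confident answer.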
For context, one of the tools I messed around with was Vettis. It's got the debate/consensus thing, which I'm more curious about than the specific brand. Mixed experience, honestly. The stock mode was genuinely useful because it surfaced the bear case in a way that felt less like doomposting and more like "here are the specific assumptions you're betting on." But I tried their strategic mode earlier, maybe an older version, and it felt kinda... weird? Like it wanted to be helpful more than it wanted to be right. Not sure if that's improved now.
The part I didn't expect: watching the bull case get torn apart by the bear case and then seeing what survived when they had to converge. The useful output wasn't the answer, it was more "here's what would change the decision," which is way closer to how I actually think when real money/time is involved.
Curious how other people think about this:
- Do you trust AI more when it's arguing with itself (and citing stuff), or does it just add noise with extra steps?
- For investing: what do you actually want to see in an output that's useful? SEC filing highlights, comps, catalysts, a risk register, downside scenarios, "what would invalidate the thesis"?
- For strategy/consulting-type questions: what's actually actionable for you: frameworks, a decision tree, an assumptions list, an experiment plan?
Also if you've tried anything like this, what made you keep or drop it?