<aside>

Working note: This recap now includes the missing local AI stack segment from Quanah Parker's talk, plus the closing discussion and hallway follow-up notes from the supplied transcript.

</aside>

Event snapshot

| Field | Details |
| --- | --- |
| Event | CV + AI Community Meetup #1 |
| Date | Wednesday, May 6, 2026 |
| Time | 5:30 PM to 7:30 PM |
| Venue | Florence Filberg Centre, Courtenay, BC |
| Presented by | BC + AI Events |
| Hosted by | Kris Krüg and Lourdes Gant |
| Attendance signal | 45 going, roughly half returning and half first-time attendees |
| Community signal | 50% women attendees, according to Lourdes Gant |

One line recap

Comox Valley AI moved from discovery into practice, with local people wrestling honestly with AI interfaces, data sovereignty, education, community safety, creative work, and the question underneath it all: how to stay human while using powerful tools.

What happened

The second gathering, and the first official Meetup #1, showed that Comox Valley AI is no longer just an experiment imported from Vancouver. Meetup #0 proved the room existed. Meetup #1 proved the room had shape, local leadership, sponsors, return energy, and its own questions.

Kris opened by naming the space BC + AI is trying to hold, not booster hype, not doomer resignation, but a place where people can be curious and critical at the same time. The group heard from Mayor Robert Wells, sponsors Natural Pastures and Tree.io, Steve Jones on custom AI interfaces, and Quanah Parker on the practical tradeoffs between local and cloud AI.

The conversation kept coming back to the same center: AI is not one thing. The model is not the product. The tool is not the interface. The data centre is not the jurisdictional answer. The local chapter is not just a satellite of Vancouver. It is its own living node in the BC + AI ecosystem.

Key themes

1. The valley wants nuance, not slogans

The opening frame carried forward from Meetup #0. People are tired of being forced into two bad choices. One side says AI is inevitable and everybody needs to get on board. The other side points to power, water, stolen creative work, bias, racism, discrimination, privacy, and labour disruption. Both sides are seeing something real. Neither side gives regular people a complete way to live, work, parent, teach, build, or govern right now.

2. Product design is the dangerous layer

Steve Jones made the cleanest distinction of the night with the chocolate-covered strawberries analogy. The strawberry is the frontier model. The chocolate is the addictive, anthropomorphic, attention-harvesting product layer wrapped around it. The important question is not simply whether AI is good or bad. The important question is whether an interface is delivering intelligence to the user or manipulating the user.

3. Anyone can now build the wrapper

Steve showed how Claude Code and direct model APIs let non-traditional builders create custom AI applications in days. SlowSpeak, his custom voice interface, rejects real-time intimacy in favour of deeper, slower, sourced answers with playback controls. His student-safe AI concept slows the interaction down even more, uses email instead of chat, limits students to one question a day, and puts a parent or teacher between the model and the child.

4. Education needs design, not denial

The student-safe AI discussion echoed Meetup #0, where education had already surfaced as the battleground. Banning AI does not prepare students for the world they are entering. Unlimited chat with sycophantic tools is also a disaster. The emerging middle path is intentional constraint: age-appropriate responses, human review, citation, curiosity expansion, and local control where sensitive student data is involved.

5. Local AI is about risk profile, not purity

Quanah Parker brought the conversation down to infrastructure. Cloud AI is fast, powerful, and useful when the data is not sensitive. Local AI matters when privacy, jurisdiction, or compliance is at stake, for example with classroom data, legal records, or medical records. But local AI is not a magical safety blanket. It takes maintenance, skill, systems thinking, and clear threat models.