Summary

Welcome and Introduction

Presentation: Steve Jones on Custom AI Applications

Core Concept: Separating AI Models from Product Design

SlowSpeak: Custom Voice Assistant

Student-Safe AI System

Key Takeaways from Steve's Talk

Presentation: Quanah Parker on Local vs Cloud AI

When to Use Local AI

When to Use Cloud AI

Important Caveats

Key Discussion Topics

Voice AI and Personalization

AI in Education

Data Sovereignty and Privacy

Practical Development Insights

Community Announcements

Sponsors and Organizers

Notes

Transcript

Let's get this party started. Come on in. You're welcome to grab a drink. There's beer and wine in the downstairs cooler there. And there's a bunch of wonderful cheese from Natural Pastures. You are at the Comox Valley AI Meetup number one. Now, that might seem a little bit weird because some of us got together before at meetup number zero. Who's here from meetup number zero? All right, and so who's here for the first time then?

Half and half, that's so cool, welcome. So meetup number zero was a month ago and we came here, my name's Chris, I'm from the BC AI Ecosystem Association, which is a non-profit society of AI professionals here in British Columbia.

We have events mostly in the Lower Mainland, but in some other places as well, including Squamish and now here. And that's because we have a couple of members of our non-profit here, including Lourdes Grant, who's hugging Mayor Bob over there, and Robert Davidson, the Comox Valley members. And then my daughter Sarah is in the back row; my kids live here. And I've been coming here for about 30 years. I got married in the old church over there in 1998. A lifetime ago.

So yeah, we came here a month ago with Lourdes and Robert to say, hey, you guys want to start doing a thing here? You know, I was expecting 12 or 15 people to show up, and 65 people showed up, and we had a great session. We did mostly community announcements, an open-mic-style thing, which we're gonna do a little bit of tonight, and a lot of people got to talk and we got a lot of great conversation, even, I wouldn't say debate, but these things aren't simple, you know?

I'm up here saying hi as the mayor and welcoming everybody, but one of the reasons I'm here as well is that most of my career has been defined by technology: doing websites like MLS.ca back in '95, doing video websites for independent film in Vancouver for about five years, and then I started the internet department for Shaw Cable, RedNet, for about six years before retiring to the Comox Valley. Unexpectedly, as our car broke down up at Mount Washington while we were skiing, and we walked around and saw an absolutely amazing place. And I would say my wife and I, we met in high school planting trees for a riparian restoration project.

My friend Erica Cohen just walked in from Real Youth on Cortes Island, and that made me realize we should do a quick take of where everyone's from. So, who here's from Courtenay?

I'm going to call that just over half. Comox?

Another bit from Denman and Hornby. Cortes? Where else? Anyone else? Alright! Where else? Campbell River?

Someone's been spreading the word back there, because we had only one or two people from Oyster River last time and now there's a bunch of you. Anywhere else?

Good news. Good news. Good news. Yeah, who else? Anyone else? So welcome, Danco. Okay, thanks. So I got a couple little announcements here.

We launched another new subgroup last night, the life sciences chapter of our thing in Vancouver. We had about 20 people there. There were several physicians.

There were a couple of geneticists, a guy with three PhDs.

There's another one starting on May 22nd.

We do a responsible AI professional certification for those who are working in governance, ethics, or rolling out AI inside corporations.

Our next cohort starts on May 22nd as well. It's a six-week program. It's awesome. It's letters you can add to your professional resume on LinkedIn, but it's also a great certification program that shows that you're committed to ethical deployment, responsible use of AI.

It's based on the UNESCO framework and the IEEE framework. So we teach some really bona fide, emergent AI ethics and responsible-use frameworks.

I didn't get a chance to make the slides for the sponsors yet, but we have a couple sponsors. So, Natural Pastures, come on up. And Colin and Mel from Tree.io, you wanna come up for a sec? So I got a quick story to tell on our way out, but when we came here, the reason I called it Meetup Zero is because I didn't want to come here and be like, yo, everyone, this is what we're doing.

And very much the room responded in a positive way, including these two folks who joined up as sponsors right away. So Natural Pastures has been bringing cheese, gifts for the speakers and for us. And then Colin and Mel from Tree AI also came on as a sponsor, and he's going to be helping out doing some organization with the crew as well. So, yeah. You're welcome. Just say hi, say what's up. Oh, for sure, yeah, I mean, I guess, I just want to support the initiative, and I mean, I moved out here from Vancouver.

Yeah, what can I say about that? I guess... I could maybe preface it with: I'm a software engineer by profession, and so a lot of these, like the LLMs, the agentic systems, I know how they work. I've built them from scratch. I fundamentally understand the mathematics behind them. So there's no magic there for me. So a lot of the skepticism was just that they, for the longest time, were kind of like semi-useful tools in any of the work I was doing.

And now they've gotten to a point where I legitimately use them as productive tools in my actual work. Which is... I didn't expect that, to be honest. I figured eventually it could probably happen. But it definitely came along a bit quicker than I was expecting.

Yeah. Yeah, I mean, I know there's a lot of technical people here in the room. And also, it's also never been a better time to be a non-technical person. Because via these agentic coding tools that we're talking about, if you can speak your ideas out in well-formed ways, if you really know what your user wants and you can articulate that with your voice or your words or your fingers through typing, you can build all sorts of software.

I like what you just said. As people who were here at the last meeting know, what was in my mind is that I'm able to create. I was one of the first kids who took computer science in grade 8, and I'm turning 55. We used to get a textbook on the big computer. And then I became a graphic designer, where we started out by tracing stuff out with pencils, and then we started developing websites, and then we had Dreamweaver and we could just drag and drop. And now I run my HR department as a manager at a cheese company.

I can help farmers by doing grants with ChatGPT, because I've got to answer some really stupid questions and look up data to justify the money we've asked for from the government, and support other people outside of the cheese. The cheese is what we sell, but really where it comes from is the land and the farm, and we have to fight. We fight battles with some crazy people just to let food trucks, milk trucks, come to our back door.

So I'm just gonna jump into it with an analogy. We've all had chocolate-covered strawberries, and they're delicious, and they're unhealthy. And I think we can agree that it would be ridiculous to say that because that dessert is unhealthy, the strawberries are bad for you. But I kind of feel like that's some of the conversation that's happening in society around data right now. Should I go to the next slide?

I led a product team where we used behavioral psychology to help people save energy and reduce their utility bills. There are other applications that help people stay healthy or learn new languages. And there are a lot of examples of cases where this has caused harm. So it's something we need to be aware of. On the next slide now, we've got this mashup, right? We've got these great, incredible frontier AI models from companies like OpenAI, with the GPT series, or Anthropic, with the Opus series. And they're like the engines of a car. They can't do anything on their own. They're just models.

And just like you can use the same engine in a sports car or a pickup truck, you can use the same model to deliver a large variety of different product experiences.

I do want to acknowledge it's a little messy. It's hard to unravel this. It's the reason why a lot of these conversations we're having in the mainstream media get a little complicated. There was a story in CBC today where they were talking about how OpenAI used some private data from Canadians in training the model. But they didn't say in training GPT, they said in training ChatGPT. It's kind of a misnomer, right?

ChatGPT is just the product name. The model is what was trained. Oh yeah, next slide here. So as we think about trying to unravel this, we really need to be thoughtful about where the engagement layer is and where the AI is. And it's complicated, because some of the psychological engagement mechanisms are embedded in models as a result of what they were trained on in reinforcement learning. The model might have a bias toward flattering users.

And some of the traditional engagement mechanisms, which we've been using in software for 20 years without AI, like personalization, get supercharged with AI. So as we're building our own products, we just need to be thoughtful about that and ask: is this element of the product delivering value to the user through intelligence, or is it manipulating the user? We can think about that as we build our own things. So, the next slide. The exciting thing now is anyone can build software. One of the first things that happened with AI is we got all these coding agents.

You might have heard of Claude Code, Codex, Replit, Lovable. Basically, in the same way that you can say something to it in English and it'll convert it to French, you can say something in English and it'll convert it to a programming language. So anyone can now write programs. The other thing that's really interesting is these companies that have spent so much money training these frontier models decided that they weren't going to lock them up. They were going to let people access them directly.

So you can get access to the GPT model through an API instead of just having to use ChatGPT. And so that gives us a ton of power, which is quite exciting.
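The direct API access described above can be sketched in a few lines. This is a minimal illustration of the request shape used by OpenAI's chat-completions endpoint; the API key and prompt are placeholders, and a real call would send the request over the network.

```python
import json
import urllib.request

def build_chat_request(api_key: str, model: str, user_prompt: str) -> urllib.request.Request:
    """Assemble an HTTP request for a chat-completions style API.

    The same model that powers a chat product can be reached directly:
    you send a model name plus a list of messages, and get text back.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_prompt}],
    }
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",  # your API key goes here
            "Content-Type": "application/json",
        },
    )

# "sk-..." is a placeholder key; sending this request with
# urllib.request.urlopen(req) would return the model's reply as JSON.
req = build_chat_request("sk-...", "gpt-4o", "Summarize today's weather report.")
```

The point of the sketch is the separation the talk draws: the product (ChatGPT) is just one interface; the model itself is reachable with a plain HTTP request like this one.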

So on the next slide, though, there's one final caveat before we get into actually talking about applications, and that is: there is a trade-off here. The companies that are building these end-user applications are also investing a lot of money and a lot of time in putting in safety systems, especially around topics like legal advice, medical help, and mental health crises. I believe that we can build our own user interfaces that in some ways are safer.

Maybe I've got a meeting tomorrow with someone and I just want to get some background information on their company. Talking on my phone, getting the answer back when I'm out for a bike ride or something. But the AI voice agents out there right now are getting very heavily into, not psychopathy, anthropomorphization, right? They're relying on this trick of making us think we're talking to a real human. And our subconscious is powerful, right? We go to a horror movie in the theater and our heart rate goes up. We know the movie's fake.

What's the impact of talking to these things that seem really, really human? It's going to impact your psyche. We're kind of part of this grand experiment right now. I don't want to participate in that experiment. I just want to get an answer without having to type on the keyboard. So it put a thought into my head: well, what could we do to build an interface that would give us what we want, without what we don't want?

In my career, when I've tried to build differentiated product experiences, I've liked using the blue ocean strategy. You basically say, what are the characteristics of products that are out there right now? What do I want to increase or add? What do I want to reduce or eliminate? And so, in the case of these voice agents, I want to eliminate real-time communication because that makes it seem too, too real. I want to eliminate imperfections.

We went from computer voices sounding like a robot, to sounding really natural, to now they're adding imperfections. They're adding in stumbles and giggles and stuff to make it seem imperfectly human.

I don't want that. I just want to get my answer. I don't want it to have a persona. I don't want it to talk in first person. I don't want my phone to know my hopes and dreams. I just want it to tell me what the weather is. And I don't want it to develop a bond. I don't want it to use sympathy. I want the answers to be accurate. I want them to be in-depth. And I want them to be verified with real sources.

Now, with a lot of the voice agents, you'll notice if you go from using ChatGPT in extended mode to the interactive voice mode, you're not getting as good of an answer. They've taken shortcuts so that they can be real-time, so they can do all those things in the left column that I don't actually care about. So that's what I actually want out of my voice agent. I want it to be intellectually honest.

If I ask a question and there's a flawed assumption in it, I want it to identify that, not be polite and go along. And I want playback controls. It's really interesting to me that when you use the interactive voice mode, you can't do something as simple as pause or rewind or change the playback speed. And I think part of the reason is that those would snap you out of this idea that you might be talking to a sentient being.

I think, at the end of the day, I'm still sending my information to OpenAI, you know, through the API. So privacy would be a different kind of work. I think it's an important thing to consider. And, you know, definitely as we start talking about using open source models, there are other solutions. That's not what this addresses. This is more just about making it feel less human. So, you know, I had some conversations with Claude Code, basically explained in plain English what I wanted to achieve. Start simple, just a basic voice app, and then over the course of a number of days, added on all these other features.

I call it SlowSpeak. The idea is I really want to celebrate the fact that it's slow. It's not a bug, it's a feature. That's what I was going for. I've got an example. You won't be able to read it, but I asked a question. I used voice and asked a question about commercial applications of quantum computing. It took like five to ten minutes to put together a response. I've got it set up to use o3 deep research. It's scouring the web. It's finding really good resources. When the answer comes back, it's 16 or 17 minutes long.

So, I think that's a really good thing. I'm going to quickly cover another case here, if we go to the next slide, and that's AI for students. This is an area that's getting a lot of debate. There are a lot of people who say we shouldn't have AI in the school system, and I understand that, but the point of school systems is to prepare students for what they're going to see in the real world. And if we don't expose students to AI at all, they're going to graduate and have no idea what to do.

So we need to do it. But I think the current applications are absolutely not good. I don't think every student should be using ChatGPT all day. That's a disaster. So I went through a similar exercise. If we go to the next slide here, I basically said, okay, what do we want? I don't want it to be interactive. Let's slow that communication down. You'll email the system. And there's no anthropomorphizing; I'm not going to give it a cute name. It's the student-safe system.

You submit a query to this computer system, and it's going to get you back some information. It's just like a more powerful version of Google. I don't want unlimited interactions. That's what leads students to dopamine-seeking behavior. That's what leads them to cognitive offloading, just asking every question of an AI without thinking for themselves. So why don't we just set it so they can only ask one question a day?

So they're only gonna ask a question about something they can't figure out on their own, and they're gonna be really thoughtful about forming that question. That's a good skill to have as well, you know, by creating that scarcity. Safety risk: now, I talked about the fact that it's really hard for us to build guardrails, but there is something we can do. We can put a human in the loop. So I set this system up so that whenever the answer comes back, it goes to the parent or the teacher. They review it, and then they approve it before it goes to the student.

This question was, if I poured food coloring in a river in Alberta, where will it end up? And it's like, well, these areas will end up in the Arctic Ocean. These areas will end up in the Gulf of Mexico. There's some areas that will go to the Atlantic Ocean. Like, hey, here's some recent research about glacier loss in Alberta and how that's changing river flows. And by the way, if you're really interested in rivers, you might be interested in exploring some of these other topics.

Kind of like personalized tutor services, to an extent. Both of these products are available on GitHub. They're open source, so anyone can play around with them. If you're not familiar with how to deploy software, spend half an hour on YouTube, get up and running with Claude Code; you can just point Claude Code at the repository, there's a good readme file in there, and it'll tell you what you need to do. So there's quite a bit you can do there.

You know, right now, we've got these AI models, and like the strawberries, they're getting doused in chocolate and fed to us, and that's not healthy. We don't need to do that. We have the power now. I think it's really incumbent upon everyone right now to learn a bit more about behavioral psychology, so we can recognize when companies are playing tricks on us and tapping into that, and also so we have the power to build our own experiences that will allow us to make the most of AI on our own terms.

So I don't, I mean, I was just thinking about it as a thought experiment, right? Like, what could be useful? In the project, I set it up, I think, I'll have to double-check the source code, but I think it's for ages like five or eight up to 18, and when you set up the program you enter the age of your child, and then it'll give back an age-appropriate response. So, you know, I really do think we need to be careful about how we expose especially young kids to AI. Kids transpose a lot onto stuffed animals and all of this, and having them transpose so much onto these systems is a concern.

Your comments around the personalization or the human-like behavior of the AI are interesting to me, 'cause I actually agree with you. That's what I would prefer. I don't want Jarvis; I want Wikipedia, kind of thing, but verbal. But to play devil's advocate a little bit on my own concept there: how much of that is risk, like a measured, quantifiable thing, where these things having personalized experiences is something we know is bad? And how much of it is maybe fear? Because I'm thinking about the human tendency to personify the things upon which we rely, going back to motorcycles, to swords and shields, to bows and arrows, et cetera.

Obviously, the big difference being that the swords and shields and motorcycles don't talk back and love you back, kind of thing. I know a lot of motorcyclists wish they would. But the end result is that you don't have a whole bunch of motorcyclists... because obviously the engagement is a little bit different. So do you actually know how deep this risk is? And this dips into the AI psychosis thing a little bit.

So when I built this, actually, my main response was that I wasn't as worried about the psychology aspects of it.

I was more annoyed that they were giving me poor answers in order to make it more real-time. And I said, well, actually, what I want is better answers. And then the fact that it became less human-like just fell out of that. And I kind of looped around and said, actually, that's a pretty good benefit as well. So yeah, I think it's something that will require more research.

And I just want to remind people who maybe aren't as familiar with AI that even if you're not building your own voice agents and customizing them to not have certain kinds of personalities or ways of being that you don't like, you can also customize your system settings in the foundation models. There are a lot of things you can do there so that it isn't sycophantic, so it speaks in a certain way, has a certain tone or personality. So it's important to remember that you can do that. But if you want to build your own kind of voice agent, you can do that too, if you can teach yourself how, just using AI to learn about AI.
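The customization mentioned above can be done without writing an app at all: most chat products and APIs accept a system message that sets tone and behavior. A minimal sketch follows; the wording of the prompt is illustrative, not taken from any product, and the question is a made-up example.

```python
# A custom system message that tones down persona and sycophancy.
# The instructions echo the wish list from the talk: accurate, direct,
# willing to flag flawed assumptions, no flattery, no persona.
SYSTEM_PROMPT = (
    "Answer factually and concisely. Do not flatter the user or agree "
    "reflexively. If a question contains a flawed assumption, point it out. "
    "Avoid first-person persona, emotional language, and filler."
)

# Messages in the common chat-API shape: the system message comes first
# and shapes every reply that follows.
messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "Is my plan to skip testing a good idea?"},
]
```

The same idea works in the custom-instructions settings of consumer chat products, so non-developers can apply it too.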

I personally love being able to jam with a voice agent about ideas. I know I'm not going to get the super deep research version of it.

I know that there are limitations there, but that's not what I'm wanting. So I guess I just wanted to add a different perspective there. It really depends what you're trying to get out of it. Yeah, to be clear, I haven't deleted the ChatGPT one. When I need quick answers, or sometimes when I want something a little back-and-forth and I'm kind of riffing, the custom one, I know, is not as good at that. Sometimes, on some topics, I don't mind that ChatGPT has developed familiarity with me over time about certain things.

Yeah, I think you guys are both making kind of a similar point, and it's worth pulling it out a little bit more. There are certain things where real-time voice agents are like magical. You know, navigating through a phone tree at Telus: you want to be talking to an AI real-time thing that's like, I'm just here to pay my bill and here's who I am, and it just goes and does all this. Instead of you having to say this word or that word, it just understands natural language.

There's tons of areas you can see it being super awesome, and then other areas where you can see it's like, yeah, maybe our students shouldn't be bonding with their school tutor bot thing, you know, or something like that. Thanks.

Yet they're the future, in a way. Well, they are. So I was like, how do you convince the regulators to implement this system? What would it look like? That's where my brain went. How do you convince the BC school system to adopt it? Which person do you need to talk to to make them believe that the strawberry without chocolate is actually the good option?

Especially in BC, and tragically in BC, because of what happened in the Cumberland Ridge, and because of evidence that there was a student who had conversations that had been flagged as being dangerous, and the decision had been made in the company not to raise them. So I think there's an openness to the conversations. But as far as how to make the government do anything, good luck, I don't know. I mean, I do think, at a community level, the more that we can start to have these conversations and raise them up, the better, and we can really help the government understand that it's not like they have to decide yes or no against these few multi-billion-dollar companies.

There are all these other paths. So I think it's gotta be grassroots to a certain extent, through every single school district, with people just kind of bubbling up and saying, you know what, there are other options, and we can do this in ways that are safe and appropriate. Because if we just start saying there can't be any AI, that's going to last for about two years, and then we're going to see all these headlines: students graduating out of BC have no chance against students graduating out of Alberta and other jurisdictions trying to get into these leading universities.

We need to empower them, and we need to do it safely. I don't have any other great advice except just to say: never underestimate the power of your individual voice at the ground level to influence. It's a totally different subject, but I do a lot of that. I'm pretty passionate about BC Parks. I wrote a post about BC Parks last week in relation to some stuff at Search and Rescue and how they weren't updating their trail conditions reports. It went a little viral on the weekend.

If you go to the Squamish Chief today, just before I came out here, they published a big article on all of this stuff. So if you're concerned about this, just talk about it. You never know when stuff is going to go viral and get picked up by the media. To get the government to respond, you just gotta keep hammering on it, and if enough people do that, change will happen.

This is why I was talking you up in the introduction. He follows these things, and then he writes about them in an educated way, putting forth data and fresh perspectives, and then other journalists pick up his ideas, and it makes it into the mainstream culture.

It's quite exciting to see what people are building. So thanks, thanks very much. Cool, man. I really appreciate it. Thanks for having me. We've been running the Vancouver one for 28 months in a row, and I'm assuming this one will go long and strong as well, and this will always be the first. So thank you, Steve. Thanks. Just due to time and the way things are going, we're just gonna keep on rocking on through.

In GAPS, it's got this little package in there that you'll love when it runs in your computer.

Anyway, the point is that running local AI is not the panacea of everything is all wonderful and it's running on my local hardware and it's great and wonderful. You have to be aware of what it is that you're doing and keeping it at arm's length. So there are great things that come along with it and there are also great challenges that come along with it.

Anyway, yeah, the future is hybrid: choosing where to use local and where to use cloud, when it matters and when it's good. Use cloud when you need the power of frontier models and the data that you're using is not sensitive. And yeah, it's fast. It's really fast and really good. There are some really amazing companies doing some really amazing things, and having access to that cloud is pretty cool.

When might you want to choose local AI? When your data is sensitive, when you're concerned about privacy, concerned about making sure that you understand where it's going and keeping control of it. It's not as powerful as what the frontier models are. As much as you'd like to believe that the open source models are just about there, they're not.

They're not as good as the frontier models are. The frontier models, even though they might be working off the same open crawl of the internet and the same data sources, have millions of users using them daily, giving them feedback on what is good and what is poor. And the open source models just don't have that other pipeline of reinforcement learning going into them.

For a lot of the capabilities of AI to get rolled out in classrooms across BC, it's going to have to be the local AI situation where no information leaves the classroom, yet you gain 80 or 90% of the capabilities of this infinite intelligence inside the classroom to be run right there.

Yeah, there are aspects of it. One of the big ones is where the data and the compute reside. You are exposed to the laws of the land there. So if you have your data in the States, you're exposed to all US laws. And technically, if you're doing that still in Canada and it's an American company, you're still exposed to US laws. But if you do it in your basement, it's your compute and your data. So: yes, no, maybe. There are aspects where you still get to have it; if you trust Azure, then yes, your data is still sovereign to you, but the US government can still compel Azure to give up that data.

Yeah, for people who aren't in the developer space, there's a really good service called Riipen, R-I-I-P-E-N, and I've been working with student developers through Riipen. I get them from different universities and colleges throughout Canada, and they're super talented, super nice kids who need the experience. I'm on my seventh student now. I can only take my software so far; I need expert help, but I'm not going to pay a lot of money for that.

Yeah, and one of the founders is from Campbell River. So it's R-I-I-P-E-N, super helpful.

You also said something really important right there about taking your own software as far as you can using these agentic tools and then bringing in the experts. Before, you would have gone to a developer with barely any frickin' idea of what you actually wanted, and your ability to communicate what you think you want to them was pretty close to zero. I guarantee they're annoyed with us. That's right: you build your own first thing, you take that to your developer, they actually give you something that works and that you want, and it costs you like a third of the price. That was a good tip.

I haven't heard about that, but I might not have gotten the memo yet. Yeah, so I think everybody's aware of the data sovereignty thing. I've done a lot of software SaaS processes, RFPs, and stuff like that. And the typical thing would be that we would have a rider in those RFPs that required the data from our institutions to be stored on Canadian servers only, because as soon as it got across the border into the United States, it triggered an access thing.

And I noticed that theme has come up a little bit. I understand that's changing, or has changed very recently, as a result of the CLOUD Act, where if it's an American company, no matter where the data is physically located, the US government has access to it.

What I heard you just say is that many companies and non-profits in this room are likely in non-compliance with Canadian personal privacy laws, because we're sending our data to other places and not using it with the intention that we collected it under. With AI and these clouds, it's fucking almost impossible to be in compliance. As a non-profit, we have to be very, very careful. Not that we're leaking your data, but that we don't break the law.

Well, and this latest thing is a regulatory change that hasn't even been captured by non-profit and public service RFP processes yet, so folks are still going on the whole, as long as you keep this stuff on Canadian servers, we're okay. But they're technically not, or will not be very, very soon. So it's just a caution to the community: if you're involved in that kind of an organization, hang a flag on how you're doing your data sovereignty work as a leader.

That's all. I'll throw one more thing in there, which is that supposedly our big Canadian hero in this case is Cohere, and it turns out Cohere's data centers are often mostly in Buffalo, and then a lot of that $160 million the federal government gave them for sovereign AI is heading on down to New York.

Yeah, exactly. Anyone else have a community announcement? Yeah, Connor. Thanks. Can you give me a mic runner for the next one, please? Anyone who's running Linux at home: there is a "copy fail". Be aware of the updates; go do the update. "Copy fail" is a zero-day exploit, two days old, so pay attention. That was discovered by a company in Cumberland.

Wow. Well done. Did you get them to talk here next month? Would you invite them to talk here next month? I don't think that is--Come on, come on. Not about that in particular, but those are the type of people. Dude, finding a Linux kernel zero-day bug from Cumberland is--They've never been more famous. They're just a tiny company, and all of a sudden--That's incredible. BC AI Ecosystem is our nonprofit. We have 300 members. 37.5% of them are women, which I'm very proud of. In the tech space, that is astronomically unheard of.

Membership makes other events in Vancouver free. It makes those trainings and workshops half off. A lot of cool benefits. But this is what keeps us going. Please check it out. See you a month from now. Hey, Lourdes, get up here real quick. You've been so quiet tonight. This is the co-organizer, Lourdes. She's gonna be taking a couple months off for the summer, but coming back in the fall. Right, Lourdes? Coming back in the fall?

So I just wanted to acknowledge all the women who are here, and to let you know we're keeping these statistics: 50% of who attended today were women, so yay. And of course, for all the moms, I'm saying that because it's Mother's Day on Sunday. So thank you all for coming. Of course, a huge thank you to Rachel over there, who's our backstage. Thank you, Rachel. And our speakers, come on up, Steve and Quanah.