[embed]https://youtu.be/1Jj8YKvQ960[/embed]

Austin Carson (02:03): Um, but anyway, so I appreciate you.
I appreciate everybody being here. I got a couple of things wound up for you, but I really would like for this to be as interactive as possible. Um, my monologue can get pretty boring after about five or 10 minutes, so I'd like to make sure this is tuned toward y'all. And I live in this world, so it's easy for me to talk inside baseball, but my goal here is to start with, you know, what's the background for me and my organization and why do I have any bearing on this? And then to some kind of general tenets that I have for communicating with folks, especially on emerging technologies. Uh, and then I'll move into kind of an instance, um, of what we've worked on for our project AI Across America, which is where the Building AI Across America name comes from, and how, you know, that tries to tie together the things I'm describing to you toward the end of, you know, a plan we identified for building up AI capacity and innovation ecosystems.

Austin Carson (03:03): So very quickly, like I said, Austin Carson, I'm the founder and president of SeedAI, um, and also work with a lot of other organizations on kind of the same front, especially around congressional education. And SeedAI is an organization I founded about a year ago, so we're right in that kind of still peak, everything's-crazy time, uh, you know, around the idea of working directly with communities to help architect a plan for them to build an AI or an innovation ecosystem that supports AI research, development, application, testing, education, and workforce development. So I know that sounds like a really big bite to take out of something. Uh, and I think the important thing to tag on there is, you know, kind of the architecting and the ecosystem component, because there are a ton of really good people that are working on each of these individual components for their respective circumstances in their communities.
Austin Carson (04:00): And so to help bring those together, and then show how they function in different environments, and help provide kind of a blueprint or a plan for public officials, you know, NGOs, private sector folks to participate in that, um, kind of lifts up that entire boat and gives you a way to help different people accomplish their objectives in kind of a super win-win scenario. Before that, I worked at NVIDIA for three and a half years, which, if you're not familiar, they make, um, the computing platform that powers, I mean, it was 92% last time I checked a year or two ago, but it powers somewhere between probably 80 and 90% of all artificial intelligence training. Um, and then I, you know, really worked closely with their technical staff and their kind of bigger-picture business staff to stand up the government affairs department, uh, and think through, you know, what's the primary added value for the organization as a kind of big ecosystem player in AI?

Austin Carson (05:04): And that is educating, right? Trying to give that experience a little bit more broadly. Uh, and then about, I would say probably a year before I left, maybe a year and a half, we started working on a project that really pushed aside a lot of the kind of misconceptions about the inevitability of AI development being or not being in certain environments, right? It was a big full-spectrum investment in the computing and the data and the education in Florida, at the University of Florida, and then a lot of effort locally and with national, uh, like NSF investments to help, like, adjust the curriculum they created, move it to other parts of the university ecosystem, in particular for, you know, folks at, um, minority-serving institutions, historically Black colleges and universities, community colleges, uh, creative institutions.
I mean, a pretty broad spectrum of folks for whom, again, as a general matter, there's a strong negativity approaching nihilism about kind of having this broad inclusion and representation.

Austin Carson (06:10): And I loved it <laugh>, obviously, since I jumped off of a gravy train and decided to start a nonprofit, which I wouldn't advise. Um, and so, you know, really trying to expand that out over time and think about how it's valuable and actionable for kind of the broad group of stakeholders I was engaging with on a daily basis, and kind of tying that together with my knowledge from, you know, educating folks, is how I got to kind of the place I am right now. And then before that, I worked for another nonprofit, TechFreedom, where I was executive director for a bit. And then before that, I worked on the Hill for like six and a half years, and my last boss was, uh, Congressman McCaul, who is now the vice chair and soon-to-be chair of the Congressional Artificial Intelligence Caucus. So that's kind of my thread drawn all the way through this thing. Um, next I'll get to some kind of general principles of how I thought about this, and some quick advice, but before that I will kind of expand on and give a reference point with 90 seconds of your life to watch a video. So I apologize for making you do it, but I think it really ties stuff together in a cleaner way than I just did. Um, and the methodology behind it is, again, tuned toward what I'm about to kind of go through here.

[Video Speaker] (07:22): AI isn't what you see in the movies. It's not—

Austin Carson (07:27): All right, here we go.

[Video Speaker] (07:28): —magic or some alien force. Artificial intelligence is a reflection of people, created by people. It already powers the devices and services we use every day. And with the pace of research, we'll go from these small systems to complex technology that can change everything. But right now we have a problem.
AI is moving fast and becoming so costly that public investments and small players can't keep up. Because AI is a reflection of the people involved, we need to ensure that tech won't harm people that don't look, act, or sound like those who made it. The good news: there's an answer. By investing locally, we can grow a diverse generation of AI dreamers and creators from across the country. America is ready for action, and SeedAI is making it easier for everyone to work together. We're curating an expert network of thinkers and doers who will help refine our work and support new initiatives. We're creating policy perspectives and detailed, actionable guidance for decision-makers around initiatives that contribute to AI access and more local capacity. Finally, we're helping to build the programs themselves, working with communities to bring everything together. SeedAI is building a more inclusive, representative future where we'll solve problems and improve lives in ways we can't yet imagine. Learn more about how we'll do it at seedai.org.

Austin Carson (08:57): All right, well, hopefully that wasn't too onerous, but you know what I'll jump to in a second about some of the communication and packaging of this stuff. Um, I'd like your feedback as we move on about, you know, what you like about it, dislike, or have questions about, um, and, you know, how you take those thoughts in relationship to anything that I share or display from here on out. So going to, you know, how do you engage with Congress? How do we get from the kind of, you know, me working for Congressman McCaul just kind of randomly thinking about these things, and, you know, how I had to balance equities all the way up that ladder, right? The first step of it is you have to establish a baseline of knowledge in terms of, like, where people are and where you have to get them.

Austin Carson (09:47): And then you have to establish the incentives for that knowledge, right? And they range kind of the gamut.
And for congressional staff there certainly is, and for other government officials, and honestly for executives at private companies, um, there is kind of a clear incentive system in their minds that ties back, you know, in a way that you may not necessarily comprehend. But each of them, especially at kind of this executive or operational, like, high-level or decision-making operational level, has that heuristic device, right? And there are commonalities that run between them. And those are ultimately the best way for you to approach a complex topic, right? So again, start with their motivation. If you're talking about congressional staff, I would argue, and I think maybe a percentage of the folks in policy positions and executive and possibly even private ones, which I haven't considered as much, but I think especially for congressional staff, you kind of cut into a couple of different, uh, like, operating frameworks for emerging technology work, kind of depending on where they sit and where their district is, right?

Austin Carson (11:02): So the first is, you know, your true nerds, the people that really love this stuff, really want to get into it. And any expert you bring around, they'll be as excited about it as you are. And I mean, the only way to really work with that category in the first place is to also really feel that way, right? And be really excited about what you're gonna present to them, or at least understand what people that feel like that want, you know, and how they internalize it. But ultimately, speaking with that group, it's important to keep in mind that they still don't actually probably have deep knowledge on the topic, right? Even if they love it, they've probably still been through like a couple of hearing preps, some meetings, and then, you know, keeping up with it along with the other 12 things they're keeping up with, right?
Austin Carson (11:44): So it's still important to remember, and this is a mistake I made a lot before, that this is all I think about and nobody else does, you know, especially the esoterica of it, and I'll get to that in a second, but it's hard to remember that. And then for the second group, you've got folks that are, you know, interested, but not to the point where they'll devote that kind of fraction of their mind that congressional staff have for any focused work, you know, and they like to learn, but I think their eyes are bigger than their stomach, if that makes sense. And also a life I've lived. Um, and so, you know, there's an interest in, you know, enough of a functional knowledge of the technology at least to kind of add to that heuristic device. And then there's an interest in how it's beneficial, right?

Austin Carson (12:27): How they can add it to something they're working on for, you know, their boss's ambitions, the district's ambitions, kind of what their ambitions are for their career. And you can see how this would cut in different directions for the other group. So I think it is especially strong given the divided attention and, like, strong motivators present in those legislative offices and some executive offices. Um, and then the final group is like, whatever, you know. I think it's kind of like, if you bring a compelling argument that has a clear benefit for, again, those three stated ambitions (boss, office, self), then I think that you can still kind of make that education happen. But it's important to know that if you kind of amp up the line and try to bring them your director of research to dig into some stuff, it's not necessary and possibly not even helpful.

Austin Carson (13:13): And I would wind back to say, for all of these groups, the packaging is super important in addition to the understanding of incentives, right?
You have to consider the timeframe, the level of interest, and then again, to the heuristic device, their district, their boss, themselves. And then for the overall operation, you can add into it, you know, what does the committee want? What does leadership want, right? What are the other people in their state doing? What's happening in their state, right? And then again, what are they bringing home for any of those folks? Um, so, you know, kind of moving a little bit beyond that, what are the mechanisms for action, right? And in fact, let me, I'll go to some kind of general principles, and then if anybody has questions, drop 'em in the chat and I'll try to answer those.

Austin Carson (14:04): But otherwise, I'll happily move on. Um, so, you know, the first thing I'll say is, on one hand, with the deep learning kind of revolution, right, modern AI, while it feels like it's been moving for a while, is moving fast, and it's still super nascent, in the sense that, you know, we're on the, like, breakthrough point to some seemingly absurdly advanced technology that we barely understand, to be honest, right? And the policies that we've really gone through and the industry that we've addressed in the past are about to fundamentally change in some ways by the advent of, um, I mean in particular large transformer models and, you know, more advanced reinforcement learning. And we can, you know, have some resources dropped for technology or for education if anybody wants to dig deeper.

Austin Carson (14:56): But we are, like, moving past this inflection point, and we are at genesis, you know. Uh, and to be honest, for that reason, I'm really thankful that you're trying to learn about this or you're interested in learning about it, because it is very, very important. Uh, critically, I cannot overstate this. It's not just that I love my idea that I went and jumped off a cliff for, starting SeedAI.
It's because I am firmly convinced that it is critical that we, Saturday-morning-cartoons style, work together and get this right and do our best. You know, we literally need to, I'm not gonna be so histrionic as to say, survive as a species, but it will not be enjoyable if we screw this one up. Um, and the second thing I'll say I'm finding is that, because you're oftentimes stepping into a relative vacuum of knowledge, especially if you make a good-faith effort to, you know, really figure out what they want and work with folks and educate them, you can to some extent overcome partisanship.

Austin Carson (15:58): And you can either do that because you're in a total vacuum, right? Where nobody's really talking about this yet, right? Like, again, conversations about recommender models on social media sites have been going on for a while, but at the same time, that's still a technology that's, like, evolving super rapidly, right? And so the conversation about it and where you can stand on it changes. It doesn't have to be just about, oh, they're blocking me or somebody else on Twitter, right? But if you move a step beyond that and get into this frontier technology I was mentioning, there's so much stuff nobody is in any way discussing outside of just kind of scratching at the edges of it. You know, there are so many things that could be identified for folks to work together on, or, like, to do some type of oversight or even jawboning on, right?

Austin Carson (16:47): If we're being honest. Um, so I think that you can to some extent get around the, you know, kind of the horrible cynicism and partisan rancor. And then the kind of carry-on thing to that is, because it is both nascent and because there are a ton, a ton of kind of opportunities to do what seems like smaller work but is actually very significant work, if that makes sense.
Little provisions of law, little additions to how things are tested or evaluated, or, you know, the expansiveness or lack of expansiveness of programs, or, you know, who is considered in things, what agencies are involved, who has headcount. I mean, just things that seem more trivial than normal can have a really big impact, in a way that is, again, kind of disproportionate to the normal universe. And so those are kind of Kobayashi Maru ways you can get at what are bigger issues without lighting the political, you know, the political torch.

Austin Carson (17:52): Uh, and then, let me see, I got a couple more things I wanna say. Okay. So it is really, really important to never underestimate the, um, power of a constituent connection. And I don't even mean that in the regular sense of, like, yeah, you have to bring in their constituent and your trade association, but I mean finding the people on the ground that are doing the exact, or as close to the exact, thing as what the member wants to do or would want to do, right? And, like, working out with them how it relates to the district and what the opportunities are, and then kind of getting them excited to work with you on the thing you want to work on. And again, it's a heavier lift, it's more groundwork, it's a pain, right? Um, but I mean, we've found super interesting folks, thank you, Catherine, that I had no idea existed as we've moved around the country in AI Across America, which I'll return to in a moment.

Austin Carson (18:53): Um, but super, super important. And so then let's go to, you know, what are the action points where forwarding an emerging technology conversation is particularly useful? Um, I wanna return back to the point that small things are much more meaningful than ever before. Um, finding out, you know, searching those things out is kind of your ultimate lever.
Like, I would say in an ideal world, if you have the time and investment and a big enough nerd that loves this stuff, you can get ahead of lobbying, right? Like, I feel like my lobbying percentage has never been above, I don't know, some 3%, I have no idea. Because the vast majority of the time, if you work out these concepts and figure out what needs to be done, it's like the process runs after that and you don't have to go back and do too much.

Austin Carson (19:47): And again, a lot of that is functionally based upon how much you do that legwork at the beginning to make sure that you're in kind of this agreeable space. And ideally you're living in a win-win space where you're able to address a number of different equities at the same time, and understand how to approach things, package them, and, with intellectual honesty, account for the different folks at play at the very beginning. So that things that, if reframed without kind of the incentive structure in place, would be politically untenable, right? Like, if the conversation and the table-setting of the thing was captured, it wouldn't really work out if somebody decided to blow it up. But if you can make it just kind of a useful, win-win core, and I'll give an example of this in a second, then you can move forward a little bit from there.

Austin Carson (20:44): Uh, and let's see, anything else? Oh, and then, so I would say two of the most useful things to do when you need to approach either a more extended education phase or a higher level of education on a particular topic, and this is no surprise to any of you, I'm sure, but I would say it is disproportionately useful to invest in. And investment in part is what I think gets lost a lot of times. But, like, invest in the coordination entity, right?
So, like, be intentional about working with the AI Caucus, or be intentional about working with the Congressional Tech Staff Association, right? Think as much as you can about how you can kind of add value in a way that does also address your concerns. 'Cause the more that you add value, and this is a lot more than just having a meeting and being like, please use this as a resource, right?

Austin Carson (21:39): Which I just legitimately banned people at NVIDIA from ever saying, because I'm like, dude, I heard that legitimately like 50 times a week. It is semantic satiation. It's like when you say a word over and over again, it loses all meaning after the 10th time, you know? And I think at this juncture it is rote, and you do wanna show a level of intentionality, especially when it's an issue that people are scared to talk about, right? Like, they're embarrassed. I was embarrassed. I was the AI guy and I was like, oh no, I don't know anything compared to these guys. I'm just gonna try to seem smart for a second. You know, which puts you in a bad spot, right? So I think kind of demonstrating that, being proactive about it, is one of the best ways. And again, having stuff over time that, you know, shows people that you're bringing something that's new and has an actionable edge to it.

Austin Carson (22:30): And so I think those are kind of my, oh, and then the final point on that is to also focus on supporting kind of the other entities that do educational work. Um, and figuring out how you do not capture or, like, interject or anything, but legitimately, is there a specific value that we can come in and inject? Is there something that we can do to feed in that answers questions that they are posing, or kind of flags, you know, a new development that's very interesting, something that is impactful and folks should know about, right?
I think there's one side of this game where people are just kind of going to third-party validators and being like, hey, we think this is ideologically aligned with you. You wanna check it out briefly and then just publish it, you know? And then I think there's the flip side where, again, to the previous point, you're not lobbying; it's like you're helping to look at what things are being established and how you can make the foundation of it crackable for folks, you know?

Austin Carson (23:29): Um, and this is the thing I used to always try to explain whenever you're talking to lawyers internally. It's like, the important thing here is not that you need to be accurate, right? The important thing here is that it's crackable, and then accurate, you know? People have to be able to really grasp what it is that you're talking about and what it means for them, quickly. Whereas if you roll down a big list of stuff off of a marketing document or a one-pager, everyone's gonna zone out, you know, unless it's just a super valuable thing that's included in that one-pager marketing document that they need. Like, folks are like, all right, nice, well, we did our favor for that guy, you know? And I think that's a place that folks kind of live.

Austin Carson (24:16): Um, and so from there, if nobody has anything they wanna pop into the chat, I will kind of move on to, you know, our exemplar of this for us, and our try at the ultimate win-win. You know, it was my real shot at maybe we can make everybody happy, you know? And I'm not that naive, but we can make a lot of people happy, I think.
So coming back to the original premise, you know, of SeedAI, uh, yeah, Connor, I feel that. Coming back to the original premise of SeedAI and some of the things that I've discussed, you know, we had so many requests, ultimately, from congressional offices, um, while I was still at NVIDIA, and I was trying to figure out, like, how can I package this up for folks in a way that's, again, really focused on spending resources on, like, them, even if it's not helpful to any individual entity, right?

Austin Carson (25:09): And, you know, it really comes down to the fact of resourcing, right? And the determination to be intentional about things. To, like, demonstrate the value, you have to invest in it yourself. And, you know, as a larger corporation, if you have a nonprofit arm or something, that could be a great place to do that, to demonstrate the investment and try to be fair-minded about it. But, you know, if not, or if it's kind of a standalone thing, you know, coming in as, hey, we've all been talking for a while about y'all wanting kind of the outputs of this conversation of how can we make this work for your district, your constituents. You know, what are the base components? How can we start stringing 'em together? What's the objective? And, you know, move towards, okay, I need staff to work this up.

Austin Carson (25:56): I need ways to package it, right? I need ways to make people understand why it would be immediately valuable to them, and for the folks that, you know, participate to be able to, you know, turn around and have something solid to stand on. Uh, and so we did, you know, effectively a kickoff event, set the table kind of with, you know, what are the main things we're talking about: the National AI Research Resource, the new NSF technology directorate. Who are the folks involved? You got some government officials, some, you know, private sector folks.
You've got, you know, the folks that have written some of these aspects of it. You've got, you know, different analysts across the line. We have some students, we have some startups, you know, and we're, again, trying to cover the broad universe of people who are impacted as you build out, or would build out, these kinds of resources for folks to get involved.

Austin Carson (26:50): And so, you know, in laying that groundwork, and then stating at this event very clearly, like, there is an opportunity now that has not existed anytime in the recent past, right? There is effort, and now at least partially successful effort, right, to massively invest in folks building, testing, prototyping, researching AI and applying it to, you know, their lives as they are already. And now we can really sprint at it, and there's a lot of effort for the inclusion and adaptation of people's different strengths and circumstances. And then from there, it's about, you know, practically stringing it together. The first thing, which is, you know, to the point about kind of packaging and finding the solution that works for folks across the board, but is also very necessary, is, you know, looking at this question of, um, you know, safety and testing and inclusion and diversity representation, um, application-level stuff, um, getting community colleges involved, right?

Austin Carson (27:59): As opposed to just the R1s, having the R1s, the top research institutions, have an opportunity to work with the companies and community colleges, right? There are all these questions that are kind of living there.
And there's one, you know, kind of encapsulated answer, which is, you know, the idea of, like, a test bed and working environment. You know, it's a place where some, like, high-level sensitive research can happen, where you can take things like large language models, put them in context and research them, and have the broad universe of people, from, like, NGOs to safety standards folks to company people to different governmental bodies, you know, kind of participating in this in some way. And then, as a collateral benefit, the same infrastructure, the same computing, much of the same data, and many of the same people that you would require to do that aspect of it are also incredibly valuable for application and commercialization and public-private partnerships, right?

Austin Carson (28:53): And so, you know, having it together like that keeps it from being something that's kind of charged in one direction or the other, and gives you both halves of that coin, right? And there's some conversation we've had about that as, like, you know, a working model. And you have a secondary component of, you know, the National AI Research Resource, which is this big investment in shared computing and data. Instead of having to worry about fighting for X billions of dollars for something contentious, and you can see some work we did on this on our website, but for something, you know, potentially contentious like a, you know, JEDI-contract-ish thing, instead we can turn around and say, well, you'll need a central piece. But if we can build out all these kinds of working environments and testbeds through the other stuff coming through CHIPS and Science, they can form, you know, the majority of that kind of compute, which is what you need at the center, right?

Austin Carson (29:52): And so again, it's a way to take something, and again, in my view, this is ideally how this should all pretty much take place, right?
I'm not even being sly or anything. I'm just like, yeah, this is what I think should happen and I think it's helpful, so maybe try that. Um, but you're able to remove that kind of top-level barrier and tie it back to, like, hey, we're gonna be able to put one of these things in Alabama, we're gonna put one of these things here or there. And it's the interconnection of them that makes it valuable, but it also needs to have some substantial tie to everybody or it doesn't accomplish the purpose. I have some disagreements about that with people I like and respect, and that's okay. So, take away from my can't-please-everybody thing, but the core objective, right?

Austin Carson (30:31): The core mission is still shared, and my differences are just based on our work and our research, but those are differences, and because we kind of operate from that shared-victory standpoint, it's a lot easier to discuss, right? And because we are a 501(c)(3), and this is again my personal whatever, it's not you-have-to-do-this, it's just a slight change in strategy. But especially because we are a (c)(3), it makes it a lot easier to be like, these opinions are based upon this stuff and we're happy to change 'em. If you guys have a difference of opinion, that's reasonable to us, you know, and we're just trying to home in on what works best for the X, Y, Z thing I've been through. Um, and again, we find that to be not only effective from an advocacy or political-maneuvering standpoint, but we also find it effective from just a goodwill perspective, right?

Austin Carson (31:23): And a willingness, uh, for folks to collaborate with us, um, and for ourselves to get smarter, to be honest. I mean, being able to have that posture itself is a, um, I'll get another video up for y'all, I know you're gonna like it, but, um, being able to just have that perspective means everywhere we go, we learn something that improves upon what we're doing.
And again, it's not that everybody else doesn't do that, but we have the luxury of not having 10-Ks to file, you know, and shareholders to answer to. We have a board, but no shareholders to answer to outside of being mission-driven. And at the same time, right, taking that luxury for ourselves doesn't mean that we aren't helpful to those other entities as well, because our mission and our goal are inherently designed to be a public and private cooperative, right?

Austin Carson (32:16): I mean, one of our tenets is, everybody should get paid. And that sprung out of the fact that certain programs were pretty much just farming out work and not paying people, but we think everybody should get paid. So whoever wants to work and collaborate on this, in my view, it shouldn't be a burden you have to carry. It should be an exciting opportunity, right? So I don't think any corporations are mad at me for being like, hey, if you come up with a good plan, this is all a hundred million dollars' worth of stuff, and it helps people out in a way that is demonstrable and reasonable, and you're collaborative and not trying to have a kind of seize-everything thing here. That's awesome. We'd love for that to happen. We'd love to talk about how great it is, you know?

Austin Carson (32:59): And so for us, it does ultimately come down to the groundwork. I think that's what's valuable to pretty much the entire universe of stakeholders: having not just the bank of people that are helpful, but also the context and the ability to cut through to what's interesting to folks, in a way that, honestly, outside of some national trade associations, I don't think you see. And even in those instances, we still get a unique take on it because it is very community-centric, right?
Like, we're trying to see where the whole thing can benefit. I mean, some of the issues are incredibly predictable and the solutions aren't easy, and they're mostly historical problems, but we try to see what we can do on everything from the super low-hanging fruit...

Austin Carson (33:53): Wow, you know, Shell is doing a project on predictive maintenance, and these students at this community college are doing the same work, you guys should just pair up and we should figure out how to get that pushed through... to the really, really big-picture stuff. It kind of opens that opportunity. So I'm gonna briefly drop into a video to demonstrate, at least for us, what this looks like in practice. This is our second AI Across America event, in Chicago. And you will see, right? You know what, just 'cause I love you guys, I'm gonna start with one, and then I'll take a vote for objection and then show you the other one. So here's from our kickoff at Stanford, right? And the following event was on the South Side of Chicago, right? So I think the contrast that's involved there is pretty clear. But, you know, if you go to Stanford, you try to see what has been done to address the issues that we're pretty well aware of in tech, right?

Austin Carson (34:55): And what happened at the beginning, how did things get screwy, right? How did people fail? How did they succeed? And then it's kind of like, what's been the experience of people on the South Side of Chicago, what's being done that's effective, what's good? And then, what are the presumptions of others who would be helpful that are incorrect, and how can we help rectify those, and how can we help team and pair people together and have some educational content so that it is jointly functional? So we'll start with the Stanford side of things.
All right, so we'll start with Stanford, and then like I said, if nobody objects, we'll move over to Chicago. All right, here we go. I gotta move.

Video Speaker (35:51): SeedAI exists for the purpose of building AI ecosystems across the country in a community-driven way, focusing on underserved people and regions.

Video Speaker (36:05): I think that America will be the true leader in AI. We're poised to do that. We have the capacity, we have the people that can lead and shape this effort.

Video Speaker (36:22): So there are regions of California where we can beneficially put these engines that can drive innovation ecosystems, in those regions that have not traditionally benefited from science and engineering research.

Video Speaker (36:43): The sooner we provide concrete guardrails and guidance through Congress and through industry activism, the better off we'll be developing this amazing technology.

Lena Jensen (37:17): Austin, you're still on mute. <laugh>

Austin Carson (37:21): Okay. So anyways, I was gonna say, any objections to another 90 seconds on Chicago as contrast, or any comments or thoughts on how that's presented versus what I've been saying so far? I'll give you guys a minute, since I think it takes 45 seconds for the questions to populate or something. I don't know. Lena, any thoughts or questions as we're waiting?

Lena Jensen (37:52): Yeah, one question that came to mind for me, especially hearing you talk in the video about how it's really a community-driven vision. I know that recently the White House released its blueprint for sort of an AI Bill of Rights. So I would love to hear you speak to what that means for AI policy. Is that changing the way you're engaging with the executive branch? Any tips you've got for folks engaging with the executive branch around these more complex issues? Would love to hear your thoughts there.
Austin Carson (38:25): Yeah, real quick, 'cause I closed a window. Okay, so I do have a couple of thoughts on this, and I won't go on further unless somebody raises it. To the question of community: for one thing, I wasn't doing anything on biometrics, et cetera, so I wasn't personally miffed. But, hey, thanks, Jessica. I think for the question of community, I got a general perception from a lot of people, especially folks in industry, I guess, who felt like the overall stakeholder requests for feedback maybe were not as extensive as they should have been, and that the document was far more substantial than anticipated, given the fact that it was, I think, originally about biometrics. I would say another thing is that, from the overall... so we wrote a letter, well, I should say we supported a letter, right?

Austin Carson (39:25): That went out from the folks who had drafted the NAIRR, the National AI Research Resource task force, which was passed into law last Congress. And moving forward, as some folks were saying that the AI Bill of Rights and the NAIRR were incompatible, that if we did a National AI Research Resource it would inherently be captured by a large company, or it would focus on the wrong types of research, there was just a pretty significant blowback. And so there was a question of... there are a lot of things that I think we largely agree on, as policy people maybe in general, and AI policy people as a subset of that. And I think technology should unquestionably be safe, right?

Austin Carson (40:18): Unquestionably be trustworthy, right?
Having a human adjudicator in all circumstances, I don't know, but you should at least have some recourse, probably. I mean, I think these are all broad principles that we either agree with or have a substantially similar opinion on. But my view is, without the environment... okay, so NIST has been working on an AI Risk Management Framework for some time, holding hearings on it, or having kind of public sessions about it. And at the eleventh panel, they had a conversation about testing and validation, and the overwhelming, overwhelming statement was context, context, context. The thing that matters most is contextually where and how technology is applied, right? And so any broad mechanism needs that contextual dive into things that are not obvious, right?

Austin Carson (41:16): I think with any transparent discrimination, like the kind of housing discrimination cases that have come forward, again, there's existing law, there's straightforward stuff that just should be dealt with, right? But as you get deeper into technology- and context-specific things, we need an environment to test them and to have people participate in that testing, right? A big takeaway for me so far is not just, oh, I'm not negatively impacted by this, right? It's very unlikely that AI's gonna be like, white men, you guys should go to jail more, get denied loans, or any of that stuff, you know. But at the same time, it's not just that my life experience is different in terms of the risk; it's also that I literally didn't consider 20% or 30% or 40% of the things people were concerned about, and the specific reasons they were concerned about them, right?
And so that goes to the contextual point. And the second point, along the lines of safety, where I think the conversation should really, at least in part, focus in earnest: on the same panel, they discussed these frontier technologies, the large language models, which are like GPT-3 and a couple of other similar products put out by different companies, but the main public-facing one was OpenAI's GPT-3, right? But this idea that any of the...

Austin Carson (42:46): One last time.

Austin Carson (42:50): There we go. Any of the large language models, they're like, I would never agree to test or validate one of those models, right? There's just too much, it's too crazy, we just can't know what's going on. And I feel like the rest of the panel's like, yeah, that's a good point. I mean, one guy's like, well, I guess we could, but it would just be obsolete by the time we finish validating it. And then it's kind of like, yeah, it sucks, move on. I'm like, wait, so we're all just gonna say we don't know how to check this insanely powerful technology, outside of just running certain tests, right? And we're, I mean, not fine with it, but we're just like, whoa, yes, that's what we gotta do. And this stuff is super useful and it's getting commercialized incredibly quickly, right?

Austin Carson (43:31): And so, super valuable. That value is not being spread as broadly as it could be, even though it's an inherently flexible, no-code, creative technology, right? That's also not really being explored until very recently, which I can get into if people care later. And then the final thing, at least for me, is that this is the very beginning. It feels like we're at the crazy part, but we are before the crazy part still, and it's happening really, really, really, really quickly, right?
If you wanna keep in touch with me, I'm gonna put this in the chat. This is one of our board members, but he has the best AI newsletter. Well, that URL sucks, and I also did it wrong, I think. Give it a sec. Yeah, okay. Anyways, that's right, you always have to copy it. I feel like an idiot. All right, lemme try this again. There you go, guys. Anyways, this is the best thing to keep up with, 'cause it'll give you a sense for that. And he has some great presentations, one of which is on our YouTube channel, that's like, why is AI so crazy right now? And again, I wanna return to the point that it's crazy. Like, it's crazy.

Austin Carson (44:50): Please understand it's crazy, and that what you're doing to focus on this is super important. Please take it as seriously as you can, and call me if I can ever help. But we're right out of time, so we're gonna quickly hit this last video, 'cause you guys are gonna love it, and then we're gonna close out the session. All right, here we go. Ready, and go.

Video Speaker (45:28): We have a once-in-a-generation opportunity to rebalance the scales of technological power. And if we get this right, you'll be building the AI applications that define our lives, rather than being subjected to them.

Video Speaker (45:41): You are at the age where you can experiment the most, and anything which looks daunting is very rewarding.

Video Speaker (45:50): My hope is that together you'll identify opportunities to invest in the innovative potential of people and organizations within your communities, especially those that have been historically marginalized or overlooked.

Video Speaker (46:04): We are building up our foundation at home and competing with our strengths: the diversity of background and experience represented across all 50 states.
By providing the resource to reach each community to become competitive, the creative potential that we can unleash is unimaginable. Video (46:28): That was my interest because of the diversity of my district. Being a person of color myself, I just wanna make sure that my district is prepared for AI and not afraid of Video Speaker (46:42): Anytime you try to do something and make it take, you've gotta involve the people you are trying to affect in the deepest level possible. Austin Carson (46:55): All right, thank you for attending my screening. I appreciate it. Um, I dropped my email down there. Feel free to reach out if you have any thoughts, questions, or just interest in what we're working on or advice or quickly welcome. [post_title] => Building AI Across America [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => building-ai-across-america [to_ping] => [pinged] => [post_modified] => 2022-10-11 20:41:15 [post_modified_gmt] => 2022-10-11 20:41:15 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.quorum.us/?post_type=resources&p=7655 [menu_order] => 0 [post_type] => resources [post_mime_type] => [comment_count] => 0 [filter] => raw ) [queried_object_id] => 7655 [request] => SELECT wp_posts.* FROM wp_posts WHERE 1=1 AND wp_posts.post_name = 'building-ai-across-america' AND wp_posts.post_type = 'resources' ORDER BY wp_posts.post_date DESC [posts] => Array ( [0] => WP_Post Object ( [ID] => 7655 [post_author] => 12 [post_date] => 2022-10-11 20:41:15 [post_date_gmt] => 2022-10-11 20:41:15 [post_content] => [embed]https://youtu.be/1Jj8YKvQ960[/embed] Austin Carson (02:03): Um, but anyway, so I appreciate you. I appreciate everybody being here. I got a couple of things wound up for you, but I really would like for this to be as interactive as possible. Um, my monologue can get pretty boring after about five or 10 minutes, so I'd like to make sure this is tuned toward y'all. 
And I live in this world, so it's easy for me to baseball, but my goal here is to start with, you know, what's the background for me and my organization and why do I have any bearing on this? And then to some kind of general tenants that I have for communicating with folks, especially on emerging technologies. Uh, and then I'll move into kind of an instance, um, of what we've worked on for kind of our project AI Across America, which is where the building AI across America name comes from and how, you know, that tries to tie together the things I'm describing to you towards the end of, you know, a plan we identified for building up AI capacity and innovation ecosystems. Austin Carson (03:03): So very quickly, like I said, Austin Carson, I'm the founder and president of C ai, um, and also work with a lot of other organizations on kind of the same front, especially around congressional education and c ai and organization I founded about a year ago. So we're right in that kind of still peak, everything's crazy time, uh, you know, around the idea of working directly with communities to help architect a plan for them to build an AI or an innovation ecosystem that supports AI research development, application testing, education, and workforce development. So I know that sounds like a really big bite to take outta something. Uh, and I think the important thing to tag on there is, you know, kind of the architecting and the ecosystem component because there are a ton of really good people that are working on each of these individual components for their respective circumstances in their communities. 
Austin Carson (04:00): And so to help bring those together and then show how they function in different environments and help provide kind of a blueprint or a plan for public officials, you know, NGOs, private sector folks to participate in that is, um, kind of lifts up that entire, that entire boat and gives you a way to help different people accomplish their objectives in kind of a super win-win scenario. Before that, I worked at Nvidia for three and a half years, which if you're not familiar, they make, um, the computing platform that powers something. I mean, it was 92% last time I checked a year or two ago, but it powers somewhere between probably 80 and 90% of all artificial intelligence training. Um, and then I, you know, really worked closely with their technical staff and their kind of bigger picture business staff to stand up the government affairs department, uh, and think through, you know, what's the primary, you know, what's the primary added value for organization as a kind of big ecosystem player in ai? Austin Carson (05:04): And that is educating, right? Trying to give that experience a little bit more broadly. Uh, and then about, I would say probably a year before I left, maybe a year and a half, we started working on a project that really pushed aside a lot of the kind of misconceptions about the inevitability of AI development being or not being in certain environments, right? It was a big full spectrum investment and the computing and the data and the education at in Florida, the University of Florida, and then a lot of effort locally and with national, uh, like NSF investments to help mo like adjust the curriculum they created, move it to other parts of the university ecosystem, in particular for, you know, folks at, um, minority-serving institutions, historically black colleges and universities, community colleges, uh, creative institutions. 
I mean, a pretty broad spectrum of folks who, again, as a general matter, there's a strong negativity approaching nihilism of, of kind of having this broad inclusion and representation. Austin Carson (06:10): And I loved it. <laugh>, obviously since I jumped off of a gravy train and decided to start a nonprofit, which wouldn't advise it. Um, and so, you know, really trying to expand that out over time and think about how it's valuable and actionable for kind of the broad group of stakeholders I was engaging with on a daily basis. And kind of tying that together with my knowledge from, you know, educating folks is how I got to kind of the place I am right now. And then before that, I worked for another nonprofit and was executive director for a bit TechFreedom. And then before that, I worked on the Hill for like six and a half years. And my last boss was, uh, Congressman McCall, who is now the vice chair and soon-to-be chair of the Congressional Artificial Intelligence Caucus. So that's kind of my thread drawn all the way through this thing. Um, next I'll get to some kind of general principles of how I thought about this, and some quick advice before that I will kind of expand on and give a reference point with 90 seconds of your life to watch a video. So apologize for making you do it, but I think it really ties stuff together in a cleaner way than I just did. Um, and the methodology behind it is again tuned toward what I'm about to kind of go through Austin Carson (07:22): Here. AI isn't what you see in the movies, it's not, Austin Carson (07:27): All right, here we go. [Video Speaker] (07:28): Magic or some alien force. Artificial intelligence is a reflection of people created by people. It already powers the devices and services we use every day. And with the pace of research, we'll go from these small systems to complex technology that can change everything. But right now we have a problem. 
AI is moving fast in becoming so costly that public investments and small players can't keep up because AI is a reflection of the people involved. We need to ensure that tech won't harm people that don't look, act, or sound like those who made it the good news. There's an answer. By investing locally, we can grow a diverse generation of AI, dreamers, and creators from across the country. America is ready for action and seed AI is making it easier for everyone to work together. We're curating an expert network of thinkers and doers who will help refine our work and support new initiatives. We're creating policy perspectives and detailed actionable guidance for decision-makers around initiatives that contribute to AI access and more local capacity. Finally, we're helping to build the programs themselves, working with communities to bring everything together. Seed AI is building a more inclusive representative future where we'll solve problems and improve lives in ways we can't get Imagine, learn more about how we'll [email protected]. Austin Carson (08:57): All right, well, hopefully, that wasn't too onerous, but you know what I'll jump to in a second about some of the communication and packaging of this stuff. Um, I'd like your feedback as we move on about, you know, what you like about it, dislike or have questions, um, and you know, how you take those thoughts in relationship to anything that I shared display from here on out. So going to, you know, how do you engage with Congress? How do we get from the kind of, you know, me working for Congressman McCaul just kind of randomly thinking about these things, and you know how I had to balance equities all the way up that ladder, right? The first step of it is you have to establish a baseline of knowledge in terms of where, like, where people are and where you have to get them. Austin Carson (09:47): And then you have to establish the incentives for that knowledge, right? And they and they range kind of the gamut. 
And for congressional staff, there certainly are, and for other government officials and honestly for executives at, at private companies, um, there are like kind of a clear incentive system in their mind that ties back to, you know, in a way that you may not necessarily comprehend. But each of them, especially at kind of this executive or operational, like high level or like decision making operational level, has that heuristic device, right? And there are commonalities that rung between them. And those are ultimately the best way for you to approach a complex topic, right? So again, start with their motivation. If you're talking about congressional staff, I would argue that in this, you know, or even I think folks, maybe a percentage of the folks in perhaps policy positions and executive and possibly even private, I haven't considered as much, but I think especially for congressional staff, you kind of cut into a couple different, uh, like operating frameworks depending upon for emerging technology work, kind of depending on where they sit and where their district is, right? Austin Carson (11:02): So the first is, you know, your true nerds, the people that like really love this stuff, really want to get into it. And any expert you bring around, they'll be excited about it as you are. And I mean, the only way to really work with that category in the first place is to also really feel that way, right? And be really excited about where you're gonna present to them, or at least understand what people that feel like that want, you know, and how they internalize it. But ultimately speaking with that group, it's important to keep in mind that they still don't actually probably have deep knowledge on the topic, right? Even if they love it, they've probably still been through like a couple of hearing prep, some meetings, and then, you know, keeping up with it along with the other 12 things they're keeping up with, right? 
Austin Carson (11:44): So it's still important to remember, and this is a mistake I made a lot before I remember that this is all I think about and nobody else, you know, especially the esoterica of it, and I'll get to that in a second, but it's hard to remember that. And then for the second group, you've got folks that are, you know, interested but not to the point where they'll devote that kind of fraction of their mind, that congressional staff for any have, for any focused work, you know, and they like to learn, but I think they're like, eyes are bigger than their stomach if that makes sense. And also a life I've lived. Um, and so, you know, there's an interest in, you know, enough, like a functional knowledge of the technology at least to kind of add to that heuristic device. And then there's an interest in how it's beneficial, right? Austin Carson (12:27): How they can add it to something they're working on for, you know, their bosses ambitions, the district ambitions, kind of what their ambitions are for their career. And you can see how this would cut in different directions for the other group. So I think it is especially strong given the divided attention and like strong motivators present in those legislative offices and some executive offices. Um, and then the final group is like, whatever, you know, I think it's kind of like a, if you bring a compelling argument that has a clear benefit for, again, those three stated boss office self ambition, then I think that you can still kind of make that education. But it's important to know that if you kind of amp up the line and try to bring them your director of research to dig into some stuff, it's not necessary and possibly not even helpful. Austin Carson (13:13): And I would wind back to say for all of these groups, the packaging is super important in addition to the understanding of incentives, right? 
You have to consider the timeframe, the level of interest, and then again, to the list device, it's their district, their boss themselves. And then for the overall operation, you can add into it, you know, what does the committee want? What does leadership want, right? What are the other people in their state doing? What's happening in their state, right? And then again, what are they, what are they bringing home for any of those folks? Um, so, you know, kind of moving a little bit beyond that, what's, what are mechanisms for action, right? And in fact, let me, I'll go to some kind of general principles, and then if anybody has questions, drop 'em in the chat and I'll try to answer those. Austin Carson (14:04): But otherwise, I'll happily move on. Um, so, you know, the first thing I'll say is, on one hand, AI and, you know, with the deep learning kind of revolution, right? But modern ai, while it feels like it's been moving for a while, it's moving fast, it's still super nascent and it's super, it's super nascent in the sense that, you know, we're on the like breakthrough point to some seemly, absurdly advanced technology that we barely understand, to be honest, right? And that the policies that we've really gone through and the industry that we've addressed in the past is about to fundamentally change in some ways by the advent of, um, I mean in particular large transformer models and, you know, more advanced reinforcement learning. And we can, you know, know, have some resources drop for technology or for education if anybody wants to dig deeper. Austin Carson (14:56): But we are like moving past this inflection point, and we are at Genesis, you know, uh, and to be honest, for that reason, I'm really thankful that you're trying to learn about this or you're interested in learning about it because it is very, very important. Uh, critically, I cannot overstate this. It's not just that I love my idea that I went and jumped off a cliff for starting seed ai. 
It's because I am firmly convinced that it is critical that we Saturday morning cartoons work together and get this right and do our best. You know, we literally need to, I, I'm not gonna be so histrionic to say the survive is a species, but it will not be enjoyable if we screw this one up. Um, and the second thing I'll say I'm finding is that because you're oftentimes stepping into a relative vacuum of knowledge, especially if you make a good faith effort to, you know, really figure out what they want and work with folks and educate them, you can to some extent overcome partisanship. Austin Carson (15:58): And you can either do that because you're in a total vacuum, right? Where nobody's really talking about this yet, right? Like, again, conversations about recommender models on social media sites have been going on for a while, but at the same time, that's still a technology that's like evolving super rapidly, right? And so the conversation about it and where you can stand on it changes. It doesn't have to be just about, oh, they're blocking me or somebody else on Twitter, right? But if you move a step beyond that and get into this frontier technology I was mentioning, there's so much stuff nobody is in any way discussing outside of just kind of scratching at the edges of it. You know, there are so many things that could be identified for folks to work together on or like to do like some type of oversight or even job owning on, right? Austin Carson (16:47): For being honest. Um, that I think that you can to some extent get around the bipartisan, you know, kind of the horrible cynicism and partisan ranker. And then the kind of carry-on thing to that is because it is both nascent and because there are a ton, a ton of kind of opportunities to do what seems like smaller work, but is actually very significant work if that makes sense. 
Little provisions of law, little additions to how things are tested or evaluated or, you know, the expansive or lack of expansiveness in programs or, you know, who is considered in things, what agencies are involved, who as headcount. I mean, just things that seem more trivial than normal can have a really big impact in a way that is again, kind of just proportionate to the normal universe. And so those are kind of cohi, maru ways you can get at what are bigger issues without lighting the political, you know, the political torch. Austin Carson (17:52): Uh, and then I can feel, I got a couple more things I wanna say. Okay. So it is really, really important to never underestimate the empower the, um, power of a constituent connection. And I don't even mean that in the regular sense of like, yeah, you have to bring in their constituent and your trade association, but I mean, finding the people on the ground that are doing the exact or as close the exact thing as what the member wants to do or would want to do, right? And like working out with them, how it relates to the district and what the opportunities are, and then kind of getting them excited to work with you on the thing you want to work on. And again, that's not, it's a, it's a, it's a heavier lift, it's a more groundwork, it's a pain, right? Um, but I mean, I, we've found super interesting that I had no idea thank you, Catherine, that I had no idea, uh, existed as we've moved around the country in AI across America, which I'll return to in a moment. Austin Carson (18:53): Um, but super, super important. And so then let's go to kind of what are the, you know, what are the action points that forward emerging technology conversation is particularly useful? Um, I wanna return back to the point about small things are much more meaningful than ever before. Um, finding out, you know, searching out, searching those things out are kind of your ultimate lever. 
Like, I would say in an ideal world, if you have the time and investment and a big enough nerd that loves this stuff, you can get ahead of lobbying, right? Like, I feel like my lobbying percentage has never been above, I don't know, some 3%. Because the vast majority of the time, if you work out these concepts and figure out what needs to be done, the process runs after that and you don't have to go back and do too much.

Austin Carson (19:47): And, again, a lot of that is functionally based upon how much you do that legwork at the beginning to make sure that you're in kind of this agreeable space. And ideally, you're living in a win-win space where you're able to address a number of different equities at the same time and understand how to approach things, package them, and, with intellectual honesty, account for the different folks at play at the very beginning. So that things that, if reframed without kind of the incentive structure in place, would be politically untenable, right? Like, if the conversation and the table setting of the thing was captured, it wouldn't really work out if somebody decided to blow it up. But if you can make it this kind of useful, win-win core, and I'll give an example of this in a second, then you can move forward a little bit from there.

Austin Carson (20:44): Uh, and let's see, anything else? Oh, and then, so I would say two of the most useful things to do when you need to approach either a more extended education phase or a higher level of education on a particular topic, and this is no surprise to any of you I'm sure, but I would say it is disproportionately useful to invest in. And investment in part is what I think gets lost a lot of times. But like, invest in the coordination entity, right?
So, like, be intentional about working with the AI Caucus, or be intentional about working with the Congressional Tech Staff Association, right? Think as much as you can about how you can kind of add value in a way that does also address your concerns. Cuz the more that you add value, and this is a lot more than just having a meeting and being like, please use this as a resource, right?

Austin Carson (21:39): Which I legitimately banned people at NVIDIA from ever saying, because I'm like, dude, I heard that legitimately like 50 times a week. It is semantic satiation. It's like when you say a word over and over again, it loses all meaning after the 10th time, you know? Then I think at this juncture it is rote, and you do wanna show a level of intentionality, especially when it's an issue that people are scared to talk about, right? Like, they're embarrassed. I was embarrassed. I was the AI guy and I was like, oh no, I don't know anything compared to these guys, I'm just gonna try to seem smart for a second. You know, which puts you in a bad spot, right? So I think kind of demonstrating that, being proactive about it, is one of the best ways. And again, having stuff over time that, you know, shows people that you're bringing something that's new and has an actionable edge to it.

Austin Carson (22:30): And so I think those are kind of my, oh, and then the final point on that is to also focus on supporting kind of the other entities that do educational work. Um, and figuring out how you do not capture or, like, interject or anything, but legitimately ask: is there a specific value that we can come in and inject? Is there something that we can do to feed in that answers questions that they are posing, or that flags, you know, a new development? It's very interesting, it's something that is impactful, and folks should know about it, right?
I think there's one side of this game where people are just kind of going to third-party validators and just being like, hey, we think this is ideologically aligned with you, you wanna check it out briefly and then just publish it, you know? And then I think there's the flip side where, again, to the previous point about not lobbying, it's like you're helping to look at what things are being established and how you can make the foundation of it crackable for folks, you know?

Austin Carson (23:29): Um, and this is the thing I used to always try to explain whenever I was talking to lawyers internally: the important thing here is not just that you need to be accurate, right? The important thing here is that it's crackable and then accurate, you know? People have to be able to really grasp what it is that you're talking about and what it means for them quickly. Whereas if you roll down a big list of stuff off of a marketing document or a one-pager, everyone's gonna zone out, you know, unless it's just a super valuable thing that's included in that one-pager marketing document that they need. Like, folks are like, all right, nice, well, we did our favor for that guy, you know? And I think that's a place that folks kind of live.

Austin Carson (24:16): Um, and so from there, if nobody has anything they wanna pop into the chat, I will kind of move on to, you know, our exemplar of this for us and our try at the ultimate win-win. You know, it was my real shot at maybe we can make everybody happy. And I'm not that naive, but we can make a lot of people happy, I think.
So coming back to the original premise, you know, of SeedAI, uh, yeah, Connor, I feel you. Coming back to the original premise of SeedAI and some of the things that I've discussed: you know, we had so many requests, ultimately, from congressional offices, um, while I was still at NVIDIA, and I was trying to figure out, like, how can I package this up for folks in a way that's, again, really focused on spending resources on them, even if it's not helpful to any individual entity, right?

Austin Carson (25:09): And, you know, it really comes down to resourcing, right? And the determination to be intentional about things. To demonstrate the value, you have to invest in it yourself. And, you know, as a larger corporation, if you have a nonprofit arm or something, that could be a great place to do that, to demonstrate the investment and try to be fair-minded about it. But, you know, if not, or if it's kind of a standalone thing, you know, coming in as, hey, we've all been talking for a while about y'all wanting kind of the outputs of this conversation, of how can we make this work for your district, your constituents. You know, what are the base components? How can we start stringing 'em together? What's the objective? And, you know, move towards, okay, I need staff to work this up.

Austin Carson (25:56): I need ways to package it, right? I need ways to make people understand why it would be immediately valuable to them, and for the folks that, you know, participate to be able to turn around and have something solid to stand on. Uh, and so we did, you know, effectively a kickoff event, set the table with, you know, what are the main things we're talking about: the National AI Research Resource, the NSF's new technology directorate, who are the folks involved? You got some government officials, some, you know, private sector folks.
You've got, you know, the folks that have written some of these aspects of it. You've got, you know, different analysts across the line. We have some students, we have some startups, you know, and we're, again, trying to cover the broad universe of people who are impacted as you build out, or would build out, these kinds of resources for folks to get involved.

Austin Carson (26:50): And so, you know, in laying that groundwork, and then stating at this event very clearly: there is an opportunity now that has not existed anytime in the recent past, right? There is an effort, and now a mostly or at least partially successful effort, right, to massively invest in folks building, testing, prototyping, researching AI, and applying it to their lives as they already are. And now we can really sprint at it, and there's a lot of effort for the inclusion and adaptation of people's different strengths and circumstances. And then from there, it's about, you know, practically stringing it together. The first thing, which is, you know, to the point about kind of packaging and finding the solution that works for folks across the board, but is also very necessary, is, you know, looking at this question of, um, you know, safety and testing and inclusion and diversity and representation, um, application-level stuff, um, getting community colleges involved, right?

Austin Carson (27:59): As opposed to just R1s, having the R1s, the top research institutions, have an opportunity to work with the companies and community colleges, right? There are all these questions that are kind of living there.
And there's one, you know, kind of encapsulated answer, which is, you know, the idea of a test bed and working environment. You know, it's a place where some, like, high-level sensitive research can happen, where you can take things like large language models, put them in context and research them, and have the broad universe of people, from, like, NGOs to safety standards folks to company people to different governmental bodies, you know, kind of participating in this in some way. And then, as a collateral benefit, the same infrastructure, the same computing, much of the same data, and many of the same people that you would require to do that aspect of it are also incredibly valuable for application and commercialization and public-private partnerships, right?

Austin Carson (28:53): And so, you know, having it together like that keeps it from being something that's kind of charged in one direction or the other and gives you both halves of that coin, right? And there's some conversation we've had about that as, like, you know, a working model. And you have a secondary component of, you know, the National AI Research Resource, which is this big investment in shared computing and data. Instead of having to worry about fighting for x billions of dollars for something contentious, and you can see some work we did on this on our website, but for something, you know, potentially contentious like a, you know, JEDI-contract-ish thing, instead we can turn around and say, well, you'll need a central piece, but if we can build out all these kinds of working environments and testbeds through the other stuff coming through CHIPS and Science, they can form, you know, the majority of that kind of compute, which you need as the center, right?

Austin Carson (29:52): And so, again, it's a way to take something, and, again, in my view, this is ideally how this should all pretty much take place, right?
I'm not even being sly or anything. I'm just like, yeah, this is what I think should happen and I think it's helpful, so maybe try that. Um, but you're able to remove that kind of top-level barrier and tie it back to, like, hey, we're gonna be able to put one of these things in Alabama, we're gonna put one of these things here or there. And it's the interconnection of them that makes it valuable, but it also needs to have some substantial tie to everybody or it doesn't accomplish the purpose. I have some disagreements about that with people I like and respect, and that's okay. So take that away from my can't-please-everybody thing, but the core objective, right?

Austin Carson (30:31): The core mission is still shared, and my differences are just based on our work and our research, but because those are differences from that kind of shared-victory standpoint, it's a lot easier to discuss, right? And because we are a 501(c)(3), and this is, again, my personal whatever, it's not that you have to do this, it's just a slight change in strategy. But especially because we are a (c)(3), it makes it a lot easier to be like, these opinions are based upon this stuff and we're happy to change 'em. If you guys have a difference of opinion, that's reasonable to us, you know, and we're just trying to home in on what works best for the x, y, z thing I've been through. Um, and again, we find that to be not only effective from an advocacy or political maneuvering standpoint, but we also find it effective from just a goodwill perspective, right?

Austin Carson (31:23): And a willingness, uh, for folks to collaborate with us, um, and for ourselves to get smarter, to be honest. I mean, being able to have that posture itself is a, um, I'll get another video up for y'all, I know you're gonna like it, but, um, being able to just have that perspective means everywhere we go, we learn something that improves upon what we're doing.
And again, it's not that everybody doesn't do that, but we have the luxury of not having 10-Ks to file, you know, and shareholders to answer to. We have a board, but, you know, no shareholders to answer to outside of being mission-driven. And at the same time, right, taking that luxury for us doesn't mean that we aren't helpful to those other entities as well, because our mission and our goal are inherently designed to be a public and private cooperative, right?

Austin Carson (32:16): I mean, one of our tenets is, like, everybody should get paid. And I mean, that sprung out of, you know, the fact that certain programs were pretty much just farming out work and not paying people. But we think everybody should get paid. So it's like, whoever wants to work and collaborate on this, in my view, it shouldn't be a burden you have to carry. It should be an exciting opportunity, right? So I don't think any corporations are mad at me for being like, hey, if you come up with a good plan, this is all a hundred million dollars' worth of stuff, and it helps people out in a way that is demonstrable and reasonable, and you're, you know, collaborative and not trying to seize the whole thing here, that's awesome. We'd love for that to happen. We'd love to talk about how great it is, you know?

Austin Carson (32:59): And so for us, it does ultimately come down to kind of the groundwork. I think that's what's valuable to pretty much the entire universe of stakeholders. It's having not just kind of the bank of people that are helpful, but also the context and the ability to, you know, cut through to what's interesting to folks in a way that, honestly, outside of some kind of national trade associations, you know, I don't think you see. And even in those instances, we still get kind of a unique take on it, because it is very community-centric, right?
Like, we're trying to see how the whole thing can benefit, and, you know, I mean, some of the issues are incredibly predictable and the solutions aren't easy, and, you know, they're mostly historical problems, but we try to see what we can do on everything from, you know, the super low-hanging fruit.

Austin Carson (33:53): Wow, you know, Shell is doing a project to do, uh, like, predictive maintenance, and these students at this community college are doing the same work. You guys should just pair together, and we should figure out how to get that pushed through, all the way to the really, really big-picture stuff. You know, it kind of opens that opportunity. So I'm gonna briefly drop into a video to kind of demonstrate, you know, at least for us, what this looks like, um, in practice. So this is our second AI Across America event, in Chicago. And you will see, right? You know what, just cuz I love you guys, I'm gonna start with one, and then I'll take a vote for objection and then show you the other one. So here's from our kickoff at Stanford, right? And the following event was on the South Side of Chicago, right? So I think it's pretty clear the contrast that's involved there. But, you know, if you go to Stanford, you try to see, like, what has been done to address the issues that we're all pretty well aware of in tech, right?

Austin Carson (34:55): And what happened at the beginning, how did things get screwy, right? How did people fail? How did they succeed? And then it's kind of like, you know, what's been the experience of people in South Side Chicago, and what's being done, what's affected, what's good? And then, like, what are the presumptions of others who would be helpful that are incorrect, and how can we help rectify those, and how can we help team and pair people together and have some educational content to get it so that it is jointly functional? So we'll start with, you know, the Stanford side of things.
All right, so we'll start with Stanford, and then, like I said, if nobody objects, we'll, uh, we'll move over to Chicago. But all right, here we go. I gotta move.

Video Speaker (35:51): SeedAI exists for the purpose of building AI ecosystems across the country in a community-driven way, focusing on underserved people and regions.

Video Speaker (36:05): I think that America will be the true leader in AI. We're poised to do that. We have the capacity, we have the people that can lead and shape this effort.

Video Speaker (36:22): So there are regions of California where we can beneficially put these engines that can drive innovation ecosystems in those regions that have not traditionally benefited from science and engineering research.

Video Speaker (36:43): The sooner we provide concrete guardrails and guidance through Congress and through industry activism, the better off we'll be in developing this amazing technology.

Lena Jensen (37:17): Austin, you're still on mute, <laugh>.

Austin Carson (37:21): Okay. So anyways, I was gonna say, any objections to another 90 seconds on Chicago as contrast? Or any comments or thoughts on that, like how that's presented versus what I've been saying so far? I'll give you guys a minute, since I think it takes 45 seconds for the questions to populate or something. I don't know. Lena, any thoughts or questions as we're waiting?

Lena Jensen (37:52): Yeah, one question that came to mind for me, you know, hearing you talk, especially in the video, touching on that it's really a community-driven vision. I know that recently the White House released its blueprint for sort of this AI Bill of Rights. And so I would love to hear you speak to: what does that mean for AI policy? Um, is that changing the way you're engaging with the executive branch? Any tips you've got for folks engaging with the executive branch around these more complex issues? Would love to hear your thoughts there.
Austin Carson (38:25): Yeah, um, real quick, and, uh, 'cause I closed a window. Okay, so I do have a couple of thoughts on this, and I won't go on further unless somebody raises it. I'll say, to the question of community, I mean, for one thing, I wasn't doing anything on biometrics, et cetera, so I wasn't personally miffed. But, you know, hey, thanks, Jessica. I think for the, um, question of community, you know, I got a general perception from a lot of people, especially folks in industry, I guess, who felt like the overall stakeholder requests for feedback maybe were not as much as they should have been, and that the document was far more substantial than anticipated, given the fact that it was, I think, originally framed around biometrics. Um, I would say, you know, another thing is that, so we wrote a letter, well, I should say we supported a letter, right?

Austin Carson (39:25): That, um, you know, went out from the folks who had drafted the NAIRR, the National AI Research Resource task force, which was passed into law last Congress. Um, and, you know, kind of moving forward, as the AI Bill of Rights came out, I think some folks were saying that the AI Bill of Rights and the NAIRR were incompatible. That if we did a national AI research resource, it would, you know, inherently be captured by a large company, or it would, you know, focus on the wrong types of research. There was just a pretty significant blowback. Um, and so, you know, there was a question there. There are a lot of things that I think we largely agree on, as kind of policy people maybe in general, and AI policy people as a subset of that. And, you know, I think technology should unquestionably be safe, right?

Austin Carson (40:18): Unquestionably be trustworthy, right?
I think, you know, having a human adjudicator in all circumstances, I don't know, but you should at least have some recourse, probably, you know. I mean, I think these are all kind of broad principles that we either agree with or have a substantially similar opinion on. Um, but my view is, without the environment, you know, okay, so NIST has been working on an AI Risk Management Framework for some time and holding hearings on it, or having kind of public sessions about it. And at the 11th panel, they had a conversation about testing and validation, you know, and the overwhelming, overwhelming statement was context, context, context. The thing that matters most is contextually where and how technology is applied, right? And so, you know, kind of any broad mechanism without that contextual dive-in on things that are not obvious, right?

Austin Carson (41:16): I think that, like, any blatant, you know, discrimination, the kind of housing, you know, discrimination cases that have come forward, I think that, again, there's existing law, there's straightforward stuff that just should be dealt with, right? But as you get deeper into, like, technology- and context-specific things, we need an environment to test them and to have people participate in that testing, right? A big takeaway for me so far is not just, like, oh, I'm not negatively impacted by this, right? It's very unlikely that AI's gonna be like, white men, you guys should go to jail more, get denied loans, or any of that stuff, you know. But at the same time, it's not just that my life experience is different in terms of the risk. It's also that I literally didn't consider, like, 20% or 30% or 40% of the things people were concerned about, and the specific reasons they were concerned about them, right?
And so that goes to the contextual point. And the second point, you know, along the lines of kind of safety, where I think the conversation should really, at least in part, focus in earnest: on the same panel, they discussed kind of these frontier technologies, the large language models, like GPT-3 and, um, a couple of other similar products that are put out by different companies, but the main public-facing one was OpenAI's GPT-3, right? But this idea that, like, any of the,

Austin Carson (42:46): One last time,

Austin Carson (42:50): There we go. Any of the large language models, they're like, I would never agree to test or validate one of those models, right? There's just too much. It's too crazy. We just can't know what's going on. And I feel like, you know, the rest of the panel's like, yeah, that's a good point. I mean, one guy's like, well, I guess we could, but it would just be obsolete by the time we finish validating it. And then we're kind of like, yeah, it sucks, move on. I'm like, wait, so we're all just gonna say we don't know how to check this, like, insanely powerful technology, outside of just running certain tests, right? And we're fine, I mean, not fine with it, but we're just like, whoa, yes, that's what we gotta do. And the stuff is super useful and it's getting commercialized incredibly quickly, right?

Austin Carson (43:31): And so it's super valuable, but that value is not being spread as broadly as it could be, even though it's an inherently flexible, no-code, and creative technology, right? So that's also not really being explored until very recently, which I can get into if people care later. And then the final thing, you know, at least for me, is that this is the very beginning. Like, it feels like we're at the crazy part, but we are before the crazy part still, and it's happening really, really, really, really quickly, right?
If you wanna keep in touch with me, I'm gonna put this in the chat. This is one of our board members, but he has the best, like, AI newsletter. Well, that URL sucks, and I also did it wrong, I think. Give it a sec. Yeah. Okay. Anyways, that's right, you always have to copy it. I feel like an idiot. Yeah. All right, lemme try this again. There you go, guys. Anyways, this is the best thing to keep up with, um, cuz it'll give you a sense for that. And he has some great presentations, one of which is on our YouTube channel, that's like, why is AI so crazy right now? And again, I wanna return back to the point that, like, it's crazy. Like, it's crazy.

Austin Carson (44:50): Please understand it's crazy, and that what you're doing to focus on this is super important. Please take it as seriously as you can, and call me if I can ever help. But we're right outta time, so we're gonna quickly hit this last video, cause you guys are gonna love it, and then we're gonna close out the session. All right, here we go. Ready, and go.

Video Speaker (45:28): We have a once-in-a-generation opportunity to rebalance the scales of technological power. And if we get this right, you'll be building the AI applications that define our lives, rather than being subjected to them.

Video Speaker (45:41): You are at the age where you can experiment the most, and anything which looks daunting is very rewarding.

Video Speaker (45:50): My hope is that together you'll identify opportunities to invest in the innovative potential of people and organizations within your communities, especially those that have been historically marginalized or overlooked.

Video Speaker (46:04): We are building up our foundation at home and competing with our strengths: the diversity of background and experience represented across all 50 states.
By providing the resources for each community to become competitive, the creative potential that we can unleash is unimaginable.

Video Speaker (46:28): That was my interest because of the diversity of my district. Being a person of color myself, I just wanna make sure that my district is prepared for AI and not afraid of it.

Video Speaker (46:42): Anytime you try to do something and make it take, you've gotta involve the people you are trying to affect at the deepest level possible.

Austin Carson (46:55): All right, thank you for attending my screening. I appreciate it. Um, I dropped my email down there. Feel free to reach out if you have any thoughts, questions, or just interest in what we're working on, or advice. All are welcome.

Austin Carson (02:03): Um, but anyway, so I appreciate you. I appreciate everybody being here. I got a couple of things wound up for you, but I really would like for this to be as interactive as possible. Um, my monologue can get pretty boring after about five or 10 minutes, so I'd like to make sure this is tuned toward y'all.
And I live in this world, so it's easy for me to talk inside baseball, but my goal here is to start with, you know, what's the background for me and my organization, and why do I have any bearing on this? And then to some kind of general tenets that I have for communicating with folks, especially on emerging technologies. Uh, and then I'll move into kind of an instance, um, of what we've worked on for our project AI Across America, which is where the Building AI Across America name comes from, and how, you know, that tries to tie together the things I'm describing to you, towards the end of, you know, a plan we identified for building up AI capacity and innovation ecosystems.

Austin Carson (03:03): So very quickly, like I said, Austin Carson, I'm the founder and president of SeedAI, um, and I also work with a lot of other organizations on kind of the same front, especially around congressional education. And SeedAI is an organization I founded about a year ago, so we're right in that kind of still-peak, everything's-crazy time, uh, you know, around the idea of working directly with communities to help architect a plan for them to build an AI or an innovation ecosystem that supports AI research, development, application, testing, education, and workforce development. So I know that sounds like a really big bite to take outta something. Uh, and I think the important thing to tag on there is, you know, kind of the architecting and the ecosystem component, because there are a ton of really good people that are working on each of these individual components for their respective circumstances in their communities.
Austin Carson (04:00): And so to help bring those together, and then show how they function in different environments, and help provide kind of a blueprint or a plan for public officials, you know, NGOs, private sector folks to participate in that, um, kind of lifts up that entire boat and gives you a way to help different people accomplish their objectives in kind of a super win-win scenario. Before that, I worked at NVIDIA for three and a half years, which, if you're not familiar, they make, um, the computing platform that powers, I mean, it was 92% last time I checked a year or two ago, but it powers somewhere between probably 80 and 90% of all artificial intelligence training. Um, and then I, you know, really worked closely with their technical staff and their kind of bigger-picture business staff to stand up the government affairs department, uh, and think through, you know, what's the primary added value for the organization as a kind of big ecosystem player in AI?

Austin Carson (05:04): And that is educating, right? Trying to give that experience a little bit more broadly. Uh, and then about, I would say, probably a year before I left, maybe a year and a half, we started working on a project that really pushed aside a lot of the kind of misconceptions about the inevitability of AI development being or not being in certain environments, right? It was a big, full-spectrum investment in the computing and the data and the education at the University of Florida, and then a lot of effort locally, and with national, uh, like NSF, investments to help, like, adjust the curriculum they created, move it to other parts of the university ecosystem, in particular for, you know, folks at, um, minority-serving institutions, historically Black colleges and universities, community colleges, uh, creative institutions.
I mean, a pretty broad spectrum of folks where, again, as a general matter, there's a strong negativity approaching nihilism about kind of having this broad inclusion and representation.

Austin Carson (06:10): And I loved it <laugh>, obviously, since I jumped off of a gravy train and decided to start a nonprofit, which I wouldn't advise. Um, and so, you know, really trying to expand that out over time and think about how it's valuable and actionable for kind of the broad group of stakeholders I was engaging with on a daily basis, and kind of tying that together with my knowledge from, you know, educating folks, is how I got to kind of the place I am right now. And then before that, I worked for another nonprofit, TechFreedom, and was executive director for a bit. And then before that, I worked on the Hill for like six and a half years, and my last boss was, uh, Congressman McCaul, who is now the vice chair and soon-to-be chair of the Congressional Artificial Intelligence Caucus. So that's kind of my thread drawn all the way through this thing. Um, next I'll get to some kind of general principles of how I thought about this, and some quick advice. Before that, I will kind of expand on and give a reference point with 90 seconds of your life to watch a video. So I apologize for making you do it, but I think it really ties stuff together in a cleaner way than I just did. Um, and the methodology behind it is, again, tuned toward what I'm about to kind of go through here.

Austin Carson (07:22): AI isn't what you see in the movies, it's not,

Austin Carson (07:27): All right, here we go.

Video Speaker (07:28): Magic or some alien force. Artificial intelligence is a reflection of people, created by people. It already powers the devices and services we use every day. And with the pace of research, we'll go from these small systems to complex technology that can change everything. But right now, we have a problem.
AI is moving fast and becoming so costly that public investments and small players can't keep up. Because AI is a reflection of the people involved, we need to ensure that tech won't harm people that don't look, act, or sound like those who made it. The good news: there's an answer. By investing locally, we can grow a diverse generation of AI dreamers and creators from across the country. America is ready for action, and SeedAI is making it easier for everyone to work together. We're curating an expert network of thinkers and doers who will help refine our work and support new initiatives. We're creating policy perspectives and detailed, actionable guidance for decision-makers around initiatives that contribute to AI access and more local capacity. Finally, we're helping to build the programs themselves, working with communities to bring everything together. SeedAI is building a more inclusive, representative future where we'll solve problems and improve lives in ways we can't yet imagine. Learn more about how at seedai.org. Austin Carson (08:57): All right, well, hopefully that wasn't too onerous, but, you know, I'll jump in a second to some of the communication and packaging of this stuff. I'd like your feedback as we move on about what you like about it, dislike, or have questions on, and how you take those thoughts in relationship to anything that I share or display from here on out. So going to, you know, how do you engage with Congress? How do we get from the kind of, you know, me working for Congressman McCaul, just kind of randomly thinking about these things, and how I had to balance equities all the way up that ladder, right? The first step of it is you have to establish a baseline of knowledge in terms of where people are and where you have to get them. Austin Carson (09:47): And then you have to establish the incentives for that knowledge, right? And they kind of run the gamut.
And for congressional staff, certainly, and for other government officials, and honestly for executives at private companies, there is a kind of clear incentive system in their mind that ties back in a way that you may not necessarily comprehend. But each of them, especially at kind of this executive or decision-making operational level, has that heuristic device, right? And there are commonalities that run among them. And those are ultimately the best way for you to approach a complex topic, right? So again, start with their motivation. If you're talking about congressional staff, and I think this goes for a percentage of folks in policy positions and executive and possibly even private roles, I haven't considered those as much, but especially for congressional staff doing emerging technology work, you kind of cut into a couple of different operating frameworks, depending on where they sit and where their district is, right? Austin Carson (11:02): So the first is your true nerds, the people that really love this stuff, really want to get into it. And any expert you bring around, they'll be as excited about it as you are. And, I mean, the only way to really work with that category in the first place is to also really feel that way, right? And be really excited about what you're gonna present to them, or at least understand what people that feel like that want, and how they internalize it. But ultimately, speaking with that group, it's important to keep in mind that they still probably don't actually have deep knowledge on the topic, right? Even if they love it, they've probably still been through, like, a couple of hearing preps, some meetings, and then, you know, keeping up with it along with the other 12 things they're keeping up with, right?
Austin Carson (11:44): So it's still important to remember, and this is a mistake I made a lot, to remember that this is all I think about and nobody else does, you know, especially the esoterica of it, and I'll get to that in a second, but it's hard to remember that. And then for the second group, you've got folks that are interested, but not to the point where they'll devote the kind of fraction of their mind that congressional staff have for any focused work. And they like to learn, but I think their eyes are bigger than their stomach, if that makes sense. Also a life I've lived. And so there's an interest in enough of a functional knowledge of the technology, at least to kind of add to that heuristic device. And then there's an interest in how it's beneficial, right? Austin Carson (12:27): How they can add it to something they're working on for their boss's ambitions, the district's ambitions, kind of what their ambitions are for their career. And you can see how this would cut in different directions for the other group. So I think it is especially strong given the divided attention and, like, strong motivators present in those legislative offices and some executive offices. And then the final group is, like, whatever, you know. I think it's kind of like, if you bring a compelling argument that has a clear benefit for, again, those three stated ambitions, boss, office, self, then I think that you can still kind of make that education happen. But it's important to know that if you kind of amp up the line and try to bring them your director of research to dig into some stuff, it's not necessary and possibly not even helpful. Austin Carson (13:13): And I would wind back to say, for all of these groups, the packaging is super important, in addition to the understanding of incentives, right?
You have to consider the timeframe, the level of interest, and then again, back to the heuristic device: their district, their boss, themselves. And then for the overall operation, you can add into it, what does the committee want? What does leadership want, right? What are the other people in their state doing? What's happening in their state, right? And then again, what are they bringing home for any of those folks? So, kind of moving a little bit beyond that, what are mechanisms for action, right? And in fact, let me, I'll go to some kind of general principles, and then if anybody has questions, drop 'em in the chat and I'll try to answer those. Austin Carson (14:04): But otherwise, I'll happily move on. So, you know, the first thing I'll say is, on one hand, AI, with the deep learning kind of revolution, right, modern AI, while it feels like it's been moving for a while, and it's moving fast, is still super nascent. It's super nascent in the sense that we're on the, like, breakthrough point to some seemingly absurdly advanced technology that we barely understand, to be honest, right? And the policies that we've really gone through and the industry that we've addressed in the past are about to fundamentally change in some ways by the advent of, in particular, large transformer models and more advanced reinforcement learning. And we can have some resources dropped for the technology or for education if anybody wants to dig deeper. Austin Carson (14:56): But we are, like, moving past this inflection point, and we are at genesis, you know. And to be honest, for that reason, I'm really thankful that you're trying to learn about this, or you're interested in learning about it, because it is very, very important. Critically, I cannot overstate this. It's not just that I love my idea that I went and jumped off a cliff for, starting SeedAI.
It's because I am firmly convinced that it is critical that we, like the Saturday morning cartoons taught us, work together and get this right and do our best. You know, we literally need to, I'm not gonna be so histrionic as to say survive as a species, but it will not be enjoyable if we screw this one up. And the second thing I'll say I'm finding is that because you're oftentimes stepping into a relative vacuum of knowledge, especially if you make a good-faith effort to really figure out what they want and work with folks and educate them, you can to some extent overcome partisanship. Austin Carson (15:58): And you can either do that because you're in a total vacuum, right, where nobody's really talking about this yet. Like, again, conversations about recommender models on social media sites have been going on for a while, but at the same time, that's still a technology that's evolving super rapidly, right? And so the conversation about it and where you can stand on it changes. It doesn't have to be just about, oh, they're blocking me or somebody else on Twitter, right? But if you move a step beyond that and get into this frontier technology I was mentioning, there's so much stuff nobody is in any way discussing, outside of just kind of scratching at the edges of it. You know, there are so many things that could be identified for folks to work together on, or, like, to do some type of oversight or even jawboning on, right? Austin Carson (16:47): To be honest. So I think that you can to some extent get around the partisanship, you know, kind of the horrible cynicism and partisan rancor. And then the kind of carry-on thing to that is, because it is both nascent and because there are a ton, a ton of kind of opportunities to do what seems like smaller work, but is actually very significant work, if that makes sense.
Little provisions of law, little additions to how things are tested or evaluated, or the expansiveness or lack of expansiveness in programs, or who is considered in things, what agencies are involved, who has headcount. I mean, just things that seem more trivial than normal can have a really big impact in a way that is, again, kind of disproportionate to the normal universe. And so those are kind of Kobayashi Maru ways you can get at what are bigger issues without lighting the political torch. Austin Carson (17:52): And then, I feel like I've got a couple more things I wanna say. Okay. So it is really, really important to never underestimate the power of a constituent connection. And I don't even mean that in the regular sense of, like, yeah, you have to bring in their constituent and your trade association. I mean finding the people on the ground that are doing the exact, or as close to the exact, thing as what the member wants to do or would want to do, right? And, like, working out with them how it relates to the district and what the opportunities are, and then kind of getting them excited to work with you on the thing you want to work on. And again, it's a heavier lift, it's more groundwork, it's a pain, right? But, I mean, we've found super interesting people that, thank you, Catherine, I had no idea existed as we've moved around the country with AI Across America, which I'll return to in a moment. Austin Carson (18:53): But super, super important. And so then let's go to, what are the action points that are particularly useful for moving the emerging technology conversation forward? I wanna return back to the point that small things are much more meaningful than ever before. Searching those things out is kind of your ultimate lever.
Like, I would say, in an ideal world, if you have the time and investment and a big enough nerd that loves this stuff, you can get ahead of lobbying, right? Like, I feel like my lobbying percentage has never been above, I don't know, some 3%, I have no idea. Because the vast majority of the time, if you work out these concepts and figure out what needs to be done, the process runs after that and you don't have to go back and do too much. Austin Carson (19:47): And, again, a lot of that is functionally based upon how much you do that legwork at the beginning to make sure that you're in kind of this agreeable space. And ideally, you're living in a win-win space where you're able to address a number of different equities at the same time, and understand how to approach things, package them, and, with intellectual honesty, account for the different folks at play at the very beginning. So that things that, if reframed without kind of the incentive structure in place, would be politically untenable, right? Like, if the conversation and the table-setting of the thing was captured, it wouldn't really work out if somebody decided to blow it up. But if you can make it just kind of a useful, win-win core, and I'll give an example of this in a second, but, like, a useful, win-win core, then you can move forward a little bit from there. Austin Carson (20:44): And let's see, anything else? Oh, and then, so I would say two of the most useful things to do when you need to approach either a more extended education phase or a higher level of education on a particular topic, and this is no surprise to any of you, I'm sure, but I would say it is disproportionately useful to invest in. And investment in particular is what I think gets lost a lot of times. But, like, invest in the coordination entity, right?
So, like, be intentional about working with the AI Caucus, or be intentional about working with the Congressional Tech Staff Association, right? Think as much as you can about how you can kind of add value in a way that does also address your concerns. Cuz the more that you add value, and this is a lot more than just having a meeting and being like, "Please use this as a resource," right? Austin Carson (21:39): Which I legitimately banned people at NVIDIA from ever saying, because I'm like, dude, I heard that legitimately, like, 50 times a week. It is semantic satiation. It's like when you say a word over and over again, it loses all meaning after the 10th time, you know? At this juncture it is rote, and you do wanna show a level of intentionality, especially when it's an issue that people are scared to talk about, right? Like, they're embarrassed. I was embarrassed. I was the AI guy, and I was like, oh no, I don't know anything compared to these guys, I'm just gonna try to seem smart for a second. You know, which puts you in a bad spot, right? So I think kind of demonstrating that, being proactive about it, is one of the best ways. And again, having stuff over time that shows people that you're bringing something that's new and has an actionable edge to it. Austin Carson (22:30): And so I think those are kind of my, oh, and then the final point on that is to also focus on supporting kind of the other entities that do educational work. And figuring out how you do not capture or, like, interject or anything, but legitimately: is there a specific value that we can come in and inject? Is there something that we can do to feed in that answers questions that they are posing, or kind of flags a new development that's very interesting, that is impactful and folks should know about, right?
I think there's one side of this game where people are just kind of going to third-party validators and just being like, hey, we think this is ideologically aligned with you, you wanna check it out briefly and then just publish it, you know? And then I think there's the flip side where, again, to the previous point, you're not lobbying; it's like you're helping to look at what things are being established and how you can make the foundation of it crackable for folks, you know? Austin Carson (23:29): And this is the thing I used to always try to explain whenever I was talking to lawyers internally. It's like, the important thing here is not just that you need to be accurate, right? The important thing here is that it's crackable and then accurate, you know? People have to be able to really grasp what it is that you're talking about and what it means for them quickly. Whereas if you roll down a big list of stuff off of a marketing document or a one-pager, everyone's gonna zone out, unless it's just a super valuable thing included in that one-pager marketing document that they need. Like, folks are like, all right, nice, well, we did our favor for that guy, you know? And I think that's a place that folks kind of live. Austin Carson (24:16): And so from there, if nobody has anything they wanna pop into the chat, I will kind of move on to our exemplar of this for us, and our try at the ultimate win-win. You know, it was my real shot at maybe we can make everybody happy. And I'm not that naive, but we can make a lot of people happy, I think.
So coming back to the original premise of SeedAI, and, yeah, Connor, I feel you. Coming back to the original premise of SeedAI and some of the things that I've discussed: we ultimately had so many requests from congressional offices while I was still at NVIDIA, and I was trying to figure out, how can I package this up for folks in a way that's, again, really focused on spending resources on them, even if it's not helpful to any individual entity, right? Austin Carson (25:09): And it really comes down to the fact of resourcing, right? And the determination to be intentional about things; to demonstrate the value, you have to invest in it yourself. And as a larger corporation, if you have a nonprofit arm or something, that could be a great place to do that, to demonstrate the investment and try to be fair-minded about it. But if not, or if it's kind of a standalone thing, you know, coming in as: hey, we've all been talking for a while about y'all wanting kind of the outputs of this conversation of how can we make this work for your district, your constituents. What are the base components? How can we start stringing 'em together? What's the objective? And move toward, okay, I need staff to work this up. Austin Carson (25:56): I need ways to package it, right? I need ways to make people understand why it would be immediately valuable to them, and for the folks that participate to be able to turn around and have something solid to stand on. And so we did, effectively, a kickoff event, set the table kind of with, what are the main things we're talking about? The National AI Research Resource, the NSF's new technology directorate. Who are the folks involved? You got some government officials, some private sector folks.
You've got the folks that have written some of these aspects of it. You've got different analysts across the line. We have some students, we have some startups, and we're, again, trying to cover the broad universe of people who are impacted as you build out, or would build out, these kinds of resources for folks to get involved. Austin Carson (26:50): And so, in laying that groundwork, and then stating at this event very clearly: there is an opportunity now that has not existed anytime in the recent past, right? There is effort, and now at least partially successful effort, right, to massively invest in folks building, testing, prototyping, researching AI, and applying it to their lives as they already are. And now we can really sprint at it, and there's a lot of effort toward the inclusion and adaptation of people's different strengths and circumstances. And then from there, it's about practically stringing it together. The first thing, which goes to the point about packaging and finding the solution that works for folks across the board, but is also very necessary, is looking at this question of safety and testing and inclusion and diversity and representation, application-level stuff, getting community colleges involved, right? Austin Carson (27:59): As opposed to just R1s, having the R1s, the top research institutions, have an opportunity to work with the companies and community colleges, right? There are all these questions that are kind of living there.
And there's one kind of encapsulated answer, which is the idea of a test bed and working environment, you know? It's a place where some high-level, sensitive research can happen, where you can take things like large language models, put them in context and research them, and have the broad universe of people, from NGOs to safety standards folks to company people to different governmental bodies, kind of participating in this in some way. And then, as a collateral benefit, the same infrastructure, the same computing, much of the same data, and many of the same people that you would require to do that aspect of it are also incredibly valuable for application and commercialization and public-private partnerships, right? Austin Carson (28:53): And so having it together like that keeps it from being something that's kind of charged in one direction or the other, and gives you both halves of that coin, right? And there's some conversation we've had about that as, like, a working model. And you have a secondary component, the National AI Research Resource, which is this big investment in shared computing and data. Instead of having to worry about fighting for X billions of dollars for something as contentious as, and you can see some work we did on this on our website, but for something potentially contentious like a JEDI-contract-ish thing, instead we can turn around and say, well, you'll need a central piece. But if we can build out all these kinds of working environments and testbeds through the other stuff coming through CHIPS and Science, they can form the majority of that kind of compute, and what you need is the center, right? Austin Carson (29:52): And so, again, it's a way to take something that, and again, in my view, this is ideally how this should all pretty much take place, right?
I'm not even being sly or anything. I'm just like, yeah, this is what I think should happen, and I think it's helpful, so maybe try that. But you're able to remove that kind of top-level barrier and tie it back to, like, hey, we're gonna be able to put one of these things in Alabama, we're gonna put one of these things here or there. And it's the interconnection of them that makes it valuable, but it also needs to have some substantial tie to everybody, or it doesn't accomplish the purpose. I have some disagreements about that with people I like and respect, and that's okay. So take that away from my can't-please-everybody thing, but the core objective, right? Austin Carson (30:31): The core mission is still shared, and my differences are just based on our work and our research. But because those are differences where we kind of operate from that shared victory standpoint, it's a lot easier to discuss, right? And because we are a 501(c)(3), and this is, again, my personal whatever, it's not "you have to do this," it's just a slight change in strategy. But especially because we are a (c)(3), it makes it a lot easier to be like, these opinions are based upon this stuff, and we're happy to change 'em if you guys have a difference of opinion that's reasonable to us. You know, we're just trying to home in on what works best for the x, y, z thing I've been through. And again, we find that to be not only effective from an advocacy or political maneuvering standpoint, but we also find it effective from just a goodwill perspective, right? Austin Carson (31:23): And a willingness for folks to collaborate with us, and for ourselves to get smarter, to be honest. I mean, I'll get another video up for y'all in a minute, I know you're gonna like it, but being able to just have that posture, that perspective, means everywhere we go, we learn something that improves upon what we're doing.
And again, it's not that everybody else doesn't do that, but we have the luxury of not having 10-Ks to have to file, you know, and shareholders to have to answer to. We have a board, but, you know, no shareholders to answer to outside of being mission-driven. And at the same time, right, taking that luxury for us doesn't mean that we aren't helpful to those other entities as well, because our mission and our goal are inherently designed to be a public and private cooperative, right? Austin Carson (32:16): I mean, one of our tenets is, like, everybody should get paid. And that sprung out of the fact that certain programs were pretty much just farming out work and not paying people, but we think everybody should get paid. So it's like, whoever wants to work and collaborate on this, in my view, it shouldn't be a burden you have to carry. It should be an exciting opportunity, right? So I don't think any corporations are mad at me for being like, hey, if you come up with a good plan, this is all a hundred million dollars' worth of stuff, and it helps people out in a way that is demonstrable and reasonable, and you're collaborative and not trying to have a kind of capture thing here. That's awesome. We'd love for that to happen. We'd love to talk about how great it is, you know? Austin Carson (32:59): And so for us, it does ultimately come down to kind of the groundwork. I think that's what's valuable to pretty much the entire universe of stakeholders. It's having not just kind of the bank of people that are helpful, but also the context and the ability to cut through to what's interesting to folks, in a way that, honestly, outside of some kind of national trade associations, I don't think you see. And even in those instances, we still get kind of a unique take on it, because it is very community-centric, right?
Like, we're trying to see how the whole thing can benefit, and where, I mean, some of the issues are incredibly predictable and the solutions aren't easy, and they're mostly historical problems, but we try to see what we can do on everything from the super low-hanging fruit. Austin Carson (33:53): Wow, you know, Shell is doing a project to do, like, predictive maintenance, and these students at this community college are doing the same work, you guys should just pair together and we should figure out how to get that pushed through, all the way to the really, really big-picture stuff. You know, it kind of opens that opportunity. So I'm gonna briefly drop into a video to kind of demonstrate, at least for us, what this looks like in practice. So this is our second AI Across America event, in Chicago. And you will see, right? You know what, just cuz I love you guys, I'm gonna start with one, and then I'll take a vote for objection and then show you the other one. So here's from our kickoff at Stanford, right? And the following event was in the South Side of Chicago, right? So I think the contrast that's involved there is pretty clear. But, you know, if you go to Stanford and try to see, like, what has been done to address the issues that we're pretty well aware of in tech, right? Austin Carson (34:55): And what happened at the beginning? How did things get screwy, right? How did people fail? How did they succeed? And then it's kind of like, what's been the experience of people in South Side Chicago? What's being done that's effective? What's good? And then, like, what are the presumptions of others who would be helpful that are incorrect, and how can we help rectify those, and how can we help team and pair people together and have some educational content to get it so that it is jointly functional? So we'll start with the Stanford side of things.
All right, so we'll start with Stanford, and then, like I said, if nobody objects, we'll move over to Chicago. But all right, here we go. I gotta move. Video Speaker (35:51): SeedAI exists for the purpose of building AI ecosystems across the country in a community-driven way, focusing on underserved people and regions. Video Speaker (36:05): I think that America will be the true leader in AI. We're poised to do that. We have the capacity, we have the people that can lead and shape this effort. Video Speaker (36:22): So there are regions of California where we can beneficially put these engines that can drive innovation ecosystems, in those regions that have not traditionally benefited from science and engineering research. Video Speaker (36:43): The sooner we provide concrete guardrails and guidance, through Congress and through industry activism, the better off we are developing this amazing technology. Lena Jensen (37:17): Austin, you're still on mute. <laugh> Austin Carson (37:21): Okay. So anyways, I was gonna say, any objections to another 90 seconds on Chicago as contrast? Or any comments or thoughts on that, like how that's presented versus what I've been saying so far? I'll give you guys a minute, since I think it takes 45 seconds for the questions to populate or something. I don't know. Lena, any thoughts or questions as we're waiting? Lena Jensen (37:52): Yeah, one question that came to mind for me, hearing you talk, especially in the video, touching on that it's really a community-driven vision. I know that recently the White House released a blueprint for sort of this AI Bill of Rights. And so I would love to hear you speak to: what does that mean for AI policy? Is that changing the way you're engaging with the executive branch? Any tips you've got for folks engaging with the executive branch around these more complex issues? Would love to hear your thoughts there.
Austin Carson (38:25): Yeah, real quick, 'cause I closed a window. Okay, so I do have a couple of thoughts on this, and I won't go on further unless somebody raises it. I'll say, to the question of community, for one thing, I wasn't doing anything on biometrics, et cetera, so I wasn't personally miffed, but, and hey, thanks, Jessica. I think for the question of community, I got a general perception from a lot of people, especially folks in industry, I guess, who felt like the overall stakeholder requests for feedback maybe were not as much as they should have been, and that the document was far more substantial than anticipated, given the fact that it was, I think, originally about biometrics. I would say another thing is that, from the overall, so we wrote a letter, well, I should say we supported a letter, right? Austin Carson (39:25): That went out from the folks who had drafted the NAIRR, the National AI Research Resource Task Force, which was passed into law last Congress. And, you know, kind of moving forward, some folks were saying that the AI Bill of Rights and the NAIRR were incompatible, that if we did a national AI research resource, it would inherently be captured by a large company, or it would focus on the wrong types of research. There was just a pretty significant blowback. And so there was a question of, there are a lot of things that I think we largely agree on as kind of policy people, maybe in general, and AI policy people as a subset of that. And, you know, I think technology should unquestionably be safe, right? Austin Carson (40:18): Unquestionably be trustworthy, right?
I think, you know, having a human adjudicator in all circumstances, I don't know, but you should at least have, you should have some recourse probably, you know, I mean I think these are all kind of broad principles that we either agree with or have like a substantially similar opinion. Um, but my view is without the environment, you know, okay, so the n uh, NIST has been working on an AI risk management framework for some time and holding hearings on it or having kind of like public sessions about it. And they had the 11th panel, they had a conversation about testing and validation, you know, and the overwhelming, overwhelming statement was context, context, context, all the thing that matters most is where is contextually where and how technology is applied, right? And so, you know, kind of any broad mechanism without that contextual dive in on things that are not obvious, right? Austin Carson (41:16): I think that it's like any transparent, you know, discrimination, the kind of housing, you know, discrimination cases that have come forward. I think that, again, there's existing law, there's straightforward stuff that just should be dealt with, right? But as you get deeper into like technology and context-specific things, we need an environment to test them and to have people participate in that testing, right? A big takeaway for me so far is not just as it like, oh, I can't, you know, I'm not negatively impacted by this, right? It's very unlikely that AI's gonna be like white men, you guys should go to jail more, get denied loans or any of that stuff, you know, But at the same time, it's not just that my life experience is different in terms of the risk, but it's also that I literally did not, you know, I didn't consider like 20% or 30% or 40% of the things people were concerned about and the specific reasons they were concerned about them, right? 
And so that goes to the contextual point and the second point, you know, along the lines of kind of safety that I think the, you know, the conversation should really at least impart somewhere focus in earnest is like on the same panel, they discuss kind of these frontier technologies, the large language models, which are like G B T three and um, a couple of other similar products that, that are put out by different companies, but the main, public-facing was open AI's G P T three, right? But this idea that like any of the Austin Carson (42:46): One last time, Austin Carson (42:50): There we go. Any of the large language models, they, they're like, I would never agree to test or validate one of those models, right? There's just too much. It's too crazy. We just can't know what's going on. And I feel like the, you know, then the rest of the panel's like Yeah, that's a good point. Yeah. I mean, one guy's like, well I guess we could, but it would just be obsolete by the time we finish validating it and then we kind of like, yeah, it sucks. Move on. I'm like, wait, so we're all just gonna say we don't know how to check this. Like insanely powerful technology outside of just running certain tests, right? And we're fine, I mean, not fine with it, but we're just like, whoa, yes, that's what we gotta do. And stuff is super useful and it's getting commercialized incredibly quickly, right? Austin Carson (43:31): And so super valuable. That value is not being spread as broadly as it could, even though it's inherently flexible no code and creative technology, right? So we're already like, that's also not really being explored until very recently, which I can get into if people care later. And then the final thing, you know, at least for me is that this is the very beginning. Like it feels like we're at the crazy part, but we are before the crazy part still, and it's happening really, really, really, really quickly. Right? 
If you wanna keep in touch with me, I'm gonna put this in a chat. You gotta, this is one of our board members, but he has the best like AI newsletter with kind of the, well that URL suck, and I also did it wrong, I think. Give it a sec. Yeah. Okay. Anyways, that's right, you always have to copy it. I feel like I idiot. Yeah. All right, lemme try this again. Right, right. You go, guys. Anyways, this is the best thing to, anyways, this is the best thing to keep up with, um, cuz it'll give you a sense for that. And he has some great presentations, one of which is on our YouTube channel. That's like, why is AI so crazy right now? And again, I wanna return back to the point that like, it's crazy. Like it's crazy. Austin Carson (44:50): Please understand it's crazy and that what you're doing to focus on this is super important. Please take it as seriously as you can and call me if I can ever help. But right outta time, we're gonna quickly hit this last video cause you guys are gonna love it, and then we're gonna close out the session. All right, here we go. Ready and go. Video Speaker (45:28): We have a once and a generation opportunity to rebalance the scales of technological power. And if we get this right, you'll be building the AI applications that define our lives rather than being subjected to them. Video Speaker (45:41): You are at the age where you can experiment the most and anything which looks daunting is very rewarding. Video Speaker (45:50): My hope is that together you'll identify opportunities to invest in the innovative potential of people and organizations within your communities, especially those that have been historically marginalized or overlooked. Video Speaker (46:04): We are building up our foundation at home and competing with our strengths. The diversity of background and experience represented across all 50 states. 
By providing the resource to reach each community to become competitive, the creative potential that we can unleash is unimaginable. Video (46:28): That was my interest because of the diversity of my district. Being a person of color myself, I just wanna make sure that my district is prepared for AI and not afraid of Video Speaker (46:42): Anytime you try to do something and make it take, you've gotta involve the people you are trying to affect in the deepest level possible. Austin Carson (46:55): All right, thank you for attending my screening. I appreciate it. Um, I dropped my email down there. Feel free to reach out if you have any thoughts, questions, or just interest in what we're working on or advice or quickly welcome. [post_title] => Building AI Across America [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => building-ai-across-america [to_ping] => [pinged] => [post_modified] => 2022-10-11 20:41:15 [post_modified_gmt] => 2022-10-11 20:41:15 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.quorum.us/?post_type=resources&p=7655 [menu_order] => 0 [post_type] => resources [post_mime_type] => [comment_count] => 0 [filter] => raw ) [comment_count] => 0 [current_comment] => -1 [found_posts] => 1 [max_num_pages] => 0 [max_num_comment_pages] => 0 [is_single] => 1 [is_preview] => [is_page] => [is_archive] => [is_date] => [is_year] => [is_month] => [is_day] => [is_time] => [is_author] => [is_category] => [is_tag] => [is_tax] => [is_search] => [is_feed] => [is_comment_feed] => [is_trackback] => [is_home] => [is_privacy_policy] => [is_404] => [is_embed] => [is_paged] => [is_admin] => [is_attachment] => [is_singular] => 1 [is_robots] => [is_favicon] => [is_posts_page] => [is_post_type_archive] => [query_vars_hash:WP_Query:private] => 738c23e8335cebc577e682424f4c854e [query_vars_changed:WP_Query:private] => [thumbnails_cached] => [allow_query_attachment_by_filename:protected] 

Building AI Across America


Austin Carson (02:03):

Um, but anyway, so I appreciate you. I appreciate everybody being here. I’ve got a couple of things lined up for you, but I’d really like for this to be as interactive as possible. My monologue can get pretty boring after about five or ten minutes, so I’d like to make sure this is tuned toward y’all. And I live in this world, so it’s easy for me to talk inside baseball. My goal here is to start with the background on me and my organization and why I have any bearing on this; then some general tenets I have for communicating with folks, especially on emerging technologies; and then I’ll move into an instance of what we’ve worked on, our project AI Across America, which is where the “Building AI Across America” name comes from, and how that tries to tie together the things I’m describing toward a plan we identified for building up AI capacity and innovation ecosystems.

Austin Carson (03:03):

So very quickly: like I said, I’m Austin Carson, founder and president of SeedAI, and I also work with a lot of other organizations on the same front, especially around congressional education. SeedAI is an organization I founded about a year ago, so we’re right in that still-peak, everything’s-crazy time. It’s built around the idea of working directly with communities to help architect a plan for them to build an AI or innovation ecosystem that supports AI research, development, application testing, education, and workforce development. I know that sounds like a really big bite to take. The important thing to tag on there is the architecting and the ecosystem component, because there are a ton of really good people working on each of these individual components for their respective circumstances in their communities.

Austin Carson (04:00):

And so to help bring those together, show how they function in different environments, and provide kind of a blueprint or plan for public officials, NGOs, and private-sector folks to participate in, that lifts up the entire boat and gives you a way to help different people accomplish their objectives in a super win-win scenario. Before that, I worked at NVIDIA for three and a half years. If you’re not familiar, they make the computing platform that powers, last time I checked a year or two ago, somewhere between 80 and 90 percent of all artificial intelligence training. I worked closely with their technical staff and their bigger-picture business staff to stand up the government affairs department and think through the primary added value for the organization as a big ecosystem player in AI.

Austin Carson (05:04):

And that is educating, right? Trying to spread that experience a little more broadly. About a year before I left, maybe a year and a half, we started working on a project that really pushed aside a lot of the misconceptions about the inevitability of AI development being, or not being, in certain environments. It was a big full-spectrum investment in the computing, the data, and the education at the University of Florida, plus a lot of effort locally and with national investments, like NSF’s, to help adjust the curriculum they created and move it to other parts of the university ecosystem, in particular for folks at minority-serving institutions, historically Black colleges and universities, community colleges, and creative institutions. A pretty broad spectrum of folks for whom, as a general matter, there’s a strong negativity approaching nihilism about this kind of broad inclusion and representation.

Austin Carson (06:10):

And I loved it <laugh>, obviously, since I jumped off of a gravy train and decided to start a nonprofit, which I wouldn’t advise. So, really trying to expand that out over time, thinking about how it’s valuable and actionable for the broad group of stakeholders I was engaging with on a daily basis, and tying that together with my knowledge from educating folks, is how I got to the place I am right now. Before that, I worked for another nonprofit, TechFreedom, where I was executive director for a bit. And before that, I worked on the Hill for about six and a half years. My last boss was Congressman McCaul, who is now the vice chair and soon-to-be chair of the Congressional Artificial Intelligence Caucus. So that’s my thread drawn all the way through this thing. Next I’ll get to some general principles of how I’ve thought about this, and some quick advice. Before that, I’ll expand on it and give a reference point by taking 90 seconds of your life to watch a video. I apologize for making you do it, but I think it ties things together in a cleaner way than I just did, and the methodology behind it is tuned toward what I’m about to go through

Austin Carson (07:22):

Here. AI isn’t what you see in the movies, it’s not,

Austin Carson (07:27):

All right, here we go.

[Video Speaker] (07:28):

magic or some alien force. Artificial intelligence is a reflection of people, created by people. It already powers the devices and services we use every day, and with the pace of research, we’ll go from these small systems to complex technology that can change everything. But right now we have a problem: AI is moving fast and becoming so costly that public investments and small players can’t keep up. Because AI is a reflection of the people involved, we need to ensure that tech won’t harm people who don’t look, act, or sound like those who made it. The good news: there’s an answer. By investing locally, we can grow a diverse generation of AI dreamers and creators from across the country. America is ready for action, and SeedAI is making it easier for everyone to work together. We’re curating an expert network of thinkers and doers who will help refine our work and support new initiatives. We’re creating policy perspectives and detailed, actionable guidance for decision-makers around initiatives that contribute to AI access and more local capacity. Finally, we’re helping to build the programs themselves, working with communities to bring everything together. SeedAI is building a more inclusive, representative future where we’ll solve problems and improve lives in ways we can’t yet imagine. Learn more about how we’ll do it at SeedAI.

Austin Carson (08:57):

All right, well, hopefully that wasn’t too onerous. I’ll jump in a second to some of the communication and packaging of this stuff. I’d like your feedback as we move on about what you like about it, dislike, or have questions on, and how you take those thoughts in relation to anything I share or display from here on out. So, going to: how do you engage with Congress? How do we get from me working for Congressman McCaul, just kind of randomly thinking about these things, and how I had to balance equities all the way up that ladder? The first step is that you have to establish a baseline of knowledge, in terms of where people are and where you have to get them.

Austin Carson (09:47):

And then you have to establish the incentives for that knowledge, right? And they range the gamut. For congressional staff, and for other government officials, and honestly for executives at private companies, there is a clear incentive system in their minds that ties back in a way you may not necessarily comprehend. But each of them, especially at this executive or decision-making operational level, has that heuristic device, right? And there are commonalities that run between them, and those are ultimately the best way for you to approach a complex topic. So again, start with their motivation. If you’re talking about congressional staff, and I’d argue this applies to some folks in policy positions in the executive branch and possibly the private sector too, you cut into a couple of different operating frameworks for emerging-technology work, depending on where they sit and where their district is, right?

Austin Carson (11:02):

So the first is your true nerds, the people that really love this stuff and really want to get into it, and any expert you bring around, they’ll be as excited about it as you are. And the only way to really work with that category in the first place is to also really feel that way, right? Be really excited about what you’re going to present to them, or at least understand what people who feel that way want and how they internalize it. But ultimately, speaking with that group, it’s important to keep in mind that they still probably don’t have deep knowledge on the topic. Even if they love it, they’ve probably still only been through a couple of hearing preps, some meetings, and then keeping up with it along with the other 12 things they’re keeping up with, right?

Austin Carson (11:44):

So it’s still important to remember, and this is a mistake I made a lot: this is all I think about, and nobody else does, especially the esoterica of it, and I’ll get to that in a second, but it’s hard to remember that. Then for the second group, you’ve got folks that are interested, but not to the point where they’ll devote the kind of fraction of their mind that congressional staff have for any focused work. They like to learn, but their eyes are bigger than their stomach, if that makes sense; also a life I’ve lived. So there’s an interest in enough functional knowledge of the technology to at least add to that heuristic device, and then there’s an interest in how it’s beneficial, right?

Austin Carson (12:27):

How they can add it to something they’re working on for their boss’s ambitions, the district’s ambitions, or their own ambitions for their career. And you can see how this would cut in different directions for the other group. I think it’s especially strong given the divided attention and the strong motivators present in those legislative offices and some executive offices. And then the final group is like, whatever. If you bring a compelling argument that has a clear benefit for, again, those three stated ambitions (boss, office, self), then I think you can still make that education happen. But it’s important to know that if you amp it up and try to bring them your director of research to dig into some stuff, it’s not necessary and possibly not even helpful.

Austin Carson (13:13):

And I would wind back to say, for all of these groups, the packaging is super important, in addition to the understanding of incentives, right? You have to consider the timeframe, the level of interest, and then, again, the heuristic device: their district, their boss, themselves. And for the overall operation, you can add in: what does the committee want? What does leadership want? What are the other people in their state doing? What’s happening in their state? And then, again, what are they bringing home for any of those folks? So, moving a little bit beyond that: what are the mechanisms for action? In fact, let me go to some general principles, and then if anybody has questions, drop them in the chat and I’ll try to answer those.

Austin Carson (14:04):

But otherwise, I’ll happily move on. So the first thing I’ll say is, on one hand, AI, with the deep-learning revolution, modern AI, while it feels like it’s been moving for a while, and it’s moving fast, is still super nascent. It’s nascent in the sense that we’re at the breakthrough point to some seemingly absurdly advanced technology that, to be honest, we barely understand, and the policies we’ve gone through and the industry we’ve addressed in the past are about to fundamentally change in some ways with the advent of, in particular, large transformer models and more advanced reinforcement learning. And we can have some resources dropped for the technology or for education if anybody wants to dig deeper.

Austin Carson (14:56):

But we are moving past this inflection point, and we are at the genesis. To be honest, for that reason, I’m really thankful that you’re trying to learn about this, or you’re interested in learning about it, because it is very, very important. Critically, I cannot overstate this: it’s not just that I love my idea, that I went and jumped off a cliff to start SeedAI. It’s because I am firmly convinced that it is critical that we, like in the Saturday morning cartoons, work together and get this right and do our best. I’m not going to be so histrionic as to say we need it to survive as a species, but it will not be enjoyable if we screw this one up. The second thing I’ll say I’m finding is that because you’re oftentimes stepping into a relative vacuum of knowledge, especially if you make a good-faith effort to really figure out what they want and work with folks and educate them, you can to some extent overcome partisanship.

Austin Carson (15:58):

And you can do that because you’re in a total vacuum, right? Where nobody’s really talking about this yet. Again, conversations about recommender models on social media sites have been going on for a while, but at the same time, that’s still a technology that’s evolving super rapidly, so the conversation about it, and where you can stand on it, changes. It doesn’t have to be just about “oh, they’re blocking me or somebody else on Twitter.” If you move a step beyond that and get into this frontier technology I was mentioning, there’s so much stuff nobody is in any way discussing outside of just scratching at the edges of it. There are so many things that could be identified for folks to work together on, or to do some type of oversight or even jawboning on, right?

Austin Carson (16:47):

To be honest, I think you can to some extent get around the horrible cynicism and partisan rancor. And the carry-on point is that, because it is nascent, there are a ton of opportunities to do what seems like smaller work but is actually very significant work: little provisions of law, little additions to how things are tested or evaluated, the expansiveness or lack of expansiveness of programs, who is considered in things, which agencies are involved, who has headcount. Things that seem more trivial than normal can have a really big impact, in a way that is disproportionate to the normal universe. And so those are kind of quieter ways you can get at what are bigger issues without lighting the political torch.

Austin Carson (17:52):

And then, I’ve got a couple more things I want to say. Okay. So: it is really, really important to never underestimate the power of a constituent connection. And I don’t even mean that in the regular sense of “yeah, you have to bring in their constituent and your trade association.” I mean finding the people on the ground that are doing the exact thing, or as close as possible to the exact thing, that the member wants to do or would want to do, working out with them how it relates to the district and what the opportunities are, and then getting them excited to work with you on the thing you want to work on. And that’s a heavier lift; it’s more groundwork; it’s a pain, right? But we’ve found super interesting people (thank you, Catherine) that I had no idea existed as we’ve moved around the country with AI Across America, which I’ll return to in a moment.

Austin Carson (18:53):

But super, super important. So then let’s go to the action points where forwarding the emerging-technology conversation is particularly useful. I want to return to the point that small things are much more meaningful than ever before. Searching those things out is kind of your ultimate lever. I would say, in an ideal world, if you have the time and investment and a big enough nerd that loves this stuff, you can get ahead of lobbying. My lobbying percentage has never been above, I don’t know, some 3 percent, because the vast majority of the time, if you work out these concepts and figure out what needs to be done, the process runs after that and you don’t have to go back and do too much.

Austin Carson (19:47):

And, again, a lot of that is functionally based on how much legwork you do at the beginning to make sure you’re in this agreeable space. Ideally you’re living in a win-win space where you’re able to address a number of different equities at the same time, and you understand how to approach things, package them, and, with intellectual honesty, account for the different folks at play from the very beginning. So things that, if framed without that incentive structure in place, would be politically untenable: if the conversation and the table-setting of the thing were captured, it wouldn’t really work out if somebody decided to blow it up. But if you can give it a useful, win-win core, and I’ll give an example of this in a second, then you can move forward a little bit from there.

Austin Carson (20:44):

Let’s see, anything else? Oh, and then I would say two of the most useful things to do when you need an extended education phase or a higher level of education on a particular topic, and this is no surprise to any of you, I’m sure, but it is disproportionately useful to invest in, and the investment part is what I think gets lost a lot of the time: invest in the coordination entity. So be intentional about working with the AI Caucus, or be intentional about working with the Congressional Tech Staff Association. Think as much as you can about how you can add value in a way that also addresses your concerns, because the more you add value, and this is a lot more than just having a meeting and saying, “Please use this as a resource,” right?

Austin Carson (21:39):

Which I legitimately banned people at NVIDIA from ever saying, because I heard that legitimately like 50 times a week. It’s semantic satiation: like when you say a word over and over again, it loses all meaning after the tenth time, you know? At this juncture it is rote, and you do want to show a level of intentionality, especially when it’s an issue people are scared to talk about. They’re embarrassed. I was embarrassed: I was the AI guy and I was like, “Oh no, I don’t know anything compared to these guys, I’m just gonna try to seem smart for a second,” which puts you in a bad spot, right? So I think demonstrating that and being proactive about it is one of the best ways. And again, having stuff over time that shows people you’re bringing something that’s new and has an actionable edge to it.

Austin Carson (22:30):

And so I think those are kind of my main points. Oh, and the final point on that is to also focus on supporting the other entities that do educational work, figuring out how you do not capture or interject, but ask: is there a specific value we can come in and inject? Is there something we can do to feed in that answers questions they’re posing, or flags a new development that’s very interesting, impactful, and something folks should know about? I think there’s one side of this game where people just go to third-party validators and say, “Hey, we think this is ideologically aligned with you, you wanna check it out briefly and then just publish it?” And then there’s the flip side where, again, you’re not lobbying: you’re helping to look at what things are being established and how you can make the foundation crackable for folks, you know?

Austin Carson (23:29):

And this is the thing I always used to try to explain whenever I was talking to lawyers internally: the important thing here is not just that you need to be accurate, right? The important thing is that it’s crackable, and then accurate. People have to be able to really grasp what it is you’re talking about and what it means for them, quickly. Whereas if you roll down a big list of stuff off of a marketing document or a one-pager, everyone’s gonna zone out, unless it’s just a super valuable thing included in that one-pager that they need. Folks are like, “All right, nice, we did our favor for that guy,” you know? And I think that’s a place folks kind of live.

Austin Carson (24:16):

And so from there, if nobody has anything they want to pop into the chat, I’ll move on to our exemplar of this, our try at the ultimate win-win. It was my real shot at “maybe we can make everybody happy.” I’m not that naive, but we can make a lot of people happy, I think. So, coming back to the original premise of SeedAI (yeah, Connor, I feel that) and some of the things I’ve discussed: we had so many requests from congressional offices while I was still at NVIDIA, and I was trying to figure out how I could package this up for folks in a way that’s really focused on spending resources on them, even if it’s not helpful to any individual entity, right?

Austin Carson (25:09):

And it really comes down to resourcing, right? And the determination to be intentional about things: to demonstrate the value, you have to invest in it yourself. As a larger corporation, if you have a nonprofit arm or something, that could be a great place to do that, to demonstrate the investment and try to be fair-minded about it. But if not, or if it’s a standalone thing, you come in as: “Hey, we’ve all been talking for a while about y’all wanting the outputs of this conversation. How can we make this work for your district, your constituents? What are the base components? How can we start stringing them together? What’s the objective?” And you move toward: okay, I need staff to work this up.

Austin Carson (25:56):

I need ways to package it, right? I need ways to make people understand why it would be immediately valuable to them, and for the folks that participate to be able to turn around and have something solid to stand on. Uh, and so we did, you know, effectively a kickoff event, set the table with, you know, what are the main things we're talking about: the National AI Research Resource, the NSF's new technology directorate, who are the folks involved? You got some government officials, some private sector folks. You've got the folks that have written some of these aspects of it. You've got different analysts across the line. We have some students, we have some startups, you know, and we're, again, trying to cover the broad universe of people who are impacted as you build out, or would build out, these kinds of resources for folks to get involved.

Austin Carson (26:50):

And so, you know, in laying that groundwork, and then stating at this event very clearly: there is an opportunity now that has not existed anytime in the recent past, right? There is effort, and now a partially successful effort, right, to massively invest in folks building, testing, prototyping, researching AI, and applying it to their lives as they already are. And now we can really sprint at it, and there's a lot of effort for the inclusion and adaptation of people's different strengths and circumstances. And then from there, it's about, you know, practically stringing it together. The first thing, which goes to the point about packaging and finding the solution that works for folks across the board, but is also very necessary, is looking at this question of, um, you know, safety and testing and inclusion and diversity representation, um, application-level stuff, um, getting community colleges involved, right?

Austin Carson (27:59):

As opposed to just R1s: having the R1s, the top research institutions, have an opportunity to work with the companies and community colleges, right? There are all these questions that are kind of living there. And there's one kind of encapsulated answer, which is, you know, the idea of a test bed and working environment. It's a place where some high-level sensitive research can happen, where you can take things like large language models, put them in context and research them, and have the broad universe of people, from NGOs to safety standards folks to company people to different governmental bodies, kind of participating in this in some way. And then, as a collateral benefit, the same infrastructure, the same computing, much of the same data, and many of the same people that you would require to do that aspect of it are also incredibly valuable for application and commercialization and public-private partnerships, right?

Austin Carson (28:53):

And so, you know, having it together like that keeps it from being something that's kind of charging in one direction or the other, and gives you both halves of that coin, right? And there's some conversation we've had about that as, you know, a working model. And you have a secondary component of, you know, the National AI Research Resource, which is this big investment in shared computing and data. Instead of having to worry about fighting for x billions of dollars for something potentially contentious, like a, you know, JEDI-contract-ish thing (and you can see some work we did on this on our website), instead we can turn around and say, well, you'll need a central piece. But if we can build out all these kinds of working environments and testbeds through the other stuff coming through CHIPS and Science, they can form, you know, the majority of that kind of compute, with what you need at the center, right?

Austin Carson (29:52):

And so again, it's a way to take something... and again, in my view, this is ideally how this should all pretty much take place, right? I'm not even being sly or anything. I'm just like, yeah, this is what I think should happen and I think it's helpful, so maybe try that. Um, but you're able to remove that kind of top-level barrier and tie it back to, like, hey, we're gonna be able to put one of these things in Alabama, we're gonna put one of these things here or there. And it's the interconnection of them that makes it valuable, but it also needs to have some substantial tie to everybody or it doesn't accomplish the purpose. I have some disagreements about that with people I like and respect, and that's okay. So that's a caveat to my can't-please-everybody thing, but the core objective, right?

Austin Carson (30:31):

The core mission is still shared, and my differences are just based on our work and our research. But because those are differences where we kind of operate from that shared-victory standpoint, it's a lot easier to discuss, right? And because we are a 501(c)(3), and this is again my personal whatever, it's not "you have to do this," it's just a slight change in strategy. But especially because we are a (c)(3), it makes it a lot easier to be like, these opinions are based upon this stuff, and we're happy to change 'em if you guys have a difference of opinion that's reasonable to us. You know, we're just trying to home in on what works best for the x, y, z things I've been through. Um, and again, we find that to be not only effective from an advocacy or political maneuvering standpoint, but we also find it effective from just a goodwill perspective, right?

Austin Carson (31:23):

And a willingness, uh, for folks to collaborate with us, um, and for ourselves to get smarter, to be honest. I mean, being able to have that posture itself is a... um, I'll get another video up for y'all, I know you're gonna like it. But, um, being able to just have that perspective means everywhere we go, we learn something that improves upon what we're doing. And again, it's not that everybody else doesn't do that, but we have the luxury of not having 10-Ks to file, you know, and shareholders to answer to. We have a board, but, you know, no shareholders to answer to outside of being mission-driven. And at the same time, right, taking that luxury for ourselves doesn't mean that we aren't helpful to those other entities as well, because our mission and our goal are inherently designed to be a public and private cooperative, right?

Austin Carson (32:16):

I mean, one of our tenets is, like, everybody should get paid. And that sprung out of, you know, the fact that certain programs were pretty much just farming out work and not paying people, but we think everybody should get paid. So, whoever wants to work and collaborate on this, in my view, it shouldn't be like a burden you have to carry. It should be an exciting opportunity, right? So I don't think any corporations are mad at me for being like, hey, if you come up with a good plan, this is all a hundred million dollars' worth of stuff, and it helps people out in a way that is demonstrable and reasonable, and you're, you know, collaborative and not trying to have a kind of seized thing here. That's awesome. We'd love for that to happen. We'd love to talk about how great it is, you know?

Austin Carson (32:59):

And so for us, it does ultimately come down to kind of the groundwork. I think that's what's valuable to pretty much the entire universe of stakeholders: having not just kind of the bank of people that are helpful, but also the context and the ability to, you know, cut through to what's interesting to folks, in a way that honestly, outside of some national trade associations, I don't think you see. And even in those instances, we still get kind of a unique take on it, because it is very community-centric, right? Like, we're trying to see how the whole thing can benefit. And, you know, I mean, some of the issues are incredibly predictable, and the solutions aren't easy, and they're mostly historical problems, but we try to see what we can do on everything from, you know, the super low-hanging fruit.

Austin Carson (33:53):

Wow, you know, Shell is doing a project on, like, predictive maintenance, and these students at this community college are doing the same work. You guys should just pair together, and we should figure out how to get that pushed through, to the really, really big-picture stuff. You know, it kind of opens that opportunity. So I'm gonna briefly drop into a video to kind of demonstrate, you know, at least for us, what this looks like, um, in practice. So this is our second AI Across America event, in Chicago. And you will see, right? You know what, just 'cause I love you guys, I'm gonna start with one, and then I'll take a vote for objection and then show you the other one. So here's from our kickoff at Stanford, right? And the following event was on the South Side of Chicago, right? So I think it's pretty clear the contrast that's involved there. But, you know, if you go to Stanford, you try to see, like, what has been done to address the issues that we're pretty well aware of in tech, right?

Austin Carson (34:55):

And what happened at the beginning, how did things get screwy, right? How did people fail? How did they succeed? And then it's kind of like, you know, what's been the experience of people on the South Side of Chicago, and what's being done that's effective, what's good? And then, like, what are the presumptions of others who would be helpful that are incorrect, and how can we help rectify those, and how can we help team and pair people together and have some educational content to get it so that it is jointly functional? So we'll start with, you know, the Stanford side of things, and then, like I said, if nobody objects, we'll, uh, we'll move over to Chicago. All right, here we go. I gotta move.

Video Speaker (35:51):

SeedAI exists for the purpose of building AI ecosystems across the country in a community-driven way, focusing on underserved people and regions.

Video Speaker (36:05):

I think that America will be the true leader in AI. We're poised to do that. We have the capacity, we have the people that can lead and shape this effort.

Video Speaker (36:22):

So there are regions of California where we can beneficially put these engines that can drive innovation ecosystems in those regions that have not traditionally benefited from science and engineering research.

Video Speaker (36:43):

The sooner we provide concrete guardrails and guidance through Congress and through industry activism, the better off we are developing this amazing technology.

Lena Jensen (37:17):

Austin, you’re still on mute, <laugh>.

Austin Carson (37:21):

Okay. So anyways, I was gonna say, any objections to another 90 seconds on Chicago as contrast? Or any comments or thoughts on that, like, how that's presented versus what I've been saying so far? I'll give you guys a minute, since I think it takes 45 seconds for the questions to populate or something, I don't know. Lena, any thoughts, questions as we're waiting?

Lena Jensen (37:52):

Yeah, one question that came to mind for me, you know, hearing you talk, especially in the video, touching on that it's really a community-driven vision. I know that recently the White House released a blueprint for sort of this AI Bill of Rights. And so I'd love to hear you speak to what that means for AI policy. Um, is that changing the way you're engaging with the executive branch? Any tips you've got for folks engaging with the executive branch around these more complex issues? Would love to hear your thoughts there.

Austin Carson (38:25):

Yeah, um, real quick, uh, 'cause I closed a window. Okay, so I do have a couple of thoughts on this, and I won't go further unless somebody raises it. I'll say, to the question of community: for one thing, I wasn't doing anything on biometrics, et cetera, so I wasn't personally miffed. But... hey, thanks, Jessica... I think, for the question of community, you know, I got a general perception from a lot of people, especially folks in industry, I guess, who felt like the overall stakeholder requests for feedback maybe were not as much as they should have been, and that the document was far more substantial than anticipated, given the fact that it was, I think, originally scoped to biometrics. Um, I would say, you know, another thing is that, from the overall... so we wrote a letter, well, I should say we supported a letter, right?

Austin Carson (39:25):

That, um, you know, went out from the folks who had drafted the NAIRR, the National AI Research Resource Task Force, which was passed into law last Congress. Um, and, you know, kind of moving forward: I think some folks were saying that the AI Bill of Rights and the NAIRR were incompatible, that if we did a National AI Research Resource, it would inherently be captured by a large company, or it would focus on the wrong types of research. There was just a pretty significant blowback. Um, and so, you know, there are a lot of things that I think we largely agree on as policy people in general, and AI policy people as a subset of that. And, you know, I think technology should unquestionably be safe, right?

Austin Carson (40:18):

Unquestionably be trustworthy, right? I think, you know, having a human adjudicator in all circumstances, I don't know, but you should at least have some recourse, probably. You know, I mean, I think these are all kind of broad principles that we either agree with or have a substantially similar opinion on. Um, but my view is, without the environment... okay, so NIST has been working on an AI Risk Management Framework for some time, and holding hearings on it, or having kind of public sessions about it. And at the eleventh panel they had a conversation about testing and validation, you know, and the overwhelming, overwhelming statement was context, context, context. The thing that matters most is, contextually, where and how technology is applied, right? And so, you know, kind of any broad mechanism without that contextual dive-in on things that are not obvious, right?

Austin Carson (41:16):

I think that, like, any apparent discrimination, the kind of housing discrimination cases that have come forward... I think that, again, there's existing law, there's straightforward stuff that just should be dealt with, right? But as you get deeper into technology- and context-specific things, we need an environment to test them and to have people participate in that testing, right? A big takeaway for me so far is not just, like, oh, I'm not negatively impacted by this, right? It's very unlikely that AI's gonna be like, white men, you guys should go to jail more, get denied loans, or any of that stuff, you know. But at the same time, it's not just that my life experience is different in terms of the risk; it's also that I literally didn't consider, like, 20% or 30% or 40% of the things people were concerned about, and the specific reasons they were concerned about them, right? And so that goes to the contextual point. And the second point, you know, along the lines of safety, where I think the conversation should really, at least in part, focus in earnest: on the same panel, they discussed these frontier technologies, the large language models, which are like GPT-3 and, um, a couple of other similar products that are put out by different companies, but the main public-facing one was OpenAI's GPT-3, right? But this idea that, like, any of the...

Austin Carson (42:46):

One last time,

Austin Carson (42:50):

There we go. Any of the large language models... they're like, I would never agree to test or validate one of those models, right? There's just too much, it's too crazy, we just can't know what's going on. And the rest of the panel's like, yeah, that's a good point. I mean, one guy's like, well, I guess we could, but it would just be obsolete by the time we finished validating it. And then we're kind of like, yeah, that sucks, move on. I'm like, wait, so we're all just gonna say we don't know how to check this, like, insanely powerful technology outside of just running certain tests, right? And we're fine... I mean, not fine with it, but we're just like, well, yes, that's what we gotta do. And this stuff is super useful, and it's getting commercialized incredibly quickly, right?

Austin Carson (43:31):

And so, super valuable, but that value is not being spread as broadly as it could be, even though it's inherently flexible, no-code, and creative technology, right? So that's also not really being explored until very recently, which I can get into if people care, later. And then the final thing, you know, at least for me, is that this is the very beginning. Like, it feels like we're at the crazy part, but we are before the crazy part still, and it's happening really, really, really, really quickly, right? If you wanna keep in touch with me, I'm gonna put this in the chat. This is one of our board members, but he has the best, like, AI newsletter. Well, that URL sucked, and I also did it wrong, I think. Give it a sec. Yeah. Okay. Anyways, that's right, you always have to copy it. I feel like an idiot. Yeah. All right, lemme try this again. There you go, guys. Anyways, this is the best thing to keep up with, um, 'cause it'll give you a sense for that. And he has some great presentations, one of which is on our YouTube channel, that's like, why is AI so crazy right now? And again, I wanna return back to the point that, like, it's crazy. Like, it's crazy.

Austin Carson (44:50):

Please understand it's crazy, and that what you're doing to focus on this is super important. Please take it as seriously as you can, and call me if I can ever help. But we're right outta time, so we're gonna quickly hit this last video, 'cause you guys are gonna love it, and then we're gonna close out the session. All right, here we go. Ready, and go.

Video Speaker (45:28):

We have a once-in-a-generation opportunity to rebalance the scales of technological power. And if we get this right, you'll be building the AI applications that define our lives rather than being subjected to them.

Video Speaker (45:41):

You are at the age where you can experiment the most and anything which looks daunting is very rewarding.

Video Speaker (45:50):

My hope is that together you’ll identify opportunities to invest in the innovative potential of people and organizations within your communities, especially those that have been historically marginalized or overlooked.

Video Speaker (46:04):

We are building up our foundation at home and competing with our strengths: the diversity of background and experience represented across all 50 states. By providing the resources for each community to become competitive, the creative potential that we can unleash is unimaginable.

Video Speaker (46:28):

That was my interest because of the diversity of my district. Being a person of color myself, I just wanna make sure that my district is prepared for AI and not afraid of it.

Video Speaker (46:42):

Anytime you try to do something and make it take, you've gotta involve the people you are trying to affect at the deepest level possible.

Austin Carson (46:55):

All right, thank you for attending my screening, I appreciate it. Um, I dropped my email down there. Feel free to reach out if you have any thoughts, questions, or just interest in what we're working on, or advice; all equally welcome.