In this week’s podcast, we’re exploring the three trends poised to redefine how we design, simulate, and deploy complex systems in 2026 and beyond: agentic AI and the Model Context Protocol for tool integration, hybrid terrestrial and non‑terrestrial wireless networks, and moving AI onto embedded hardware using virtual sensing and reduced‑order models. My guests, Seth DeLand and Houman Zarrinkoub from MathWorks, and I explore the practical problems in engineering workflows driving interest in agentic AI, the engineering challenges of hybrid terrestrial and non‑terrestrial wireless networks, and how smaller AI models can enable faster simulation and real‑time system responses.
Links for February 6, 2026
MATLAB and Simulink for Artificial Intelligence
MATLAB and Simulink for Wireless Communications
Mercedes-Benz Research & Development India Deploys Embedded AI for Cabin Comfort
Click here to check out the Fish Fry Archive.
Click here to subscribe to Fish Fry via Podbean
Click here to get the Fish Fry RSS Feed
Click here to subscribe to Fish Fry via Apple Podcasts
Click here to subscribe to Fish Fry via Spotify
Transcript
Hello there everyone. Welcome to episode number 668 of this here electronic engineering podcast called Amelia’s Weekly Fish Fry, brought to you by eejournal.com and written, produced, and hosted by me, Amelia Dalton. In this week’s podcast, we’re exploring three trends poised to redefine how we design, simulate, and deploy complex systems in 2026 and beyond. That’s right, my friends. We’re talking about AI agents, hybrid wireless networks, and the shift of AI directly onto embedded hardware. My guests are Seth DeLand and Houman Zarrinkoub from MathWorks. Seth, Houman, and I explore the practical problems in today’s engineering workflows that are driving interest in agentic AI rather than standalone language models, the engineering challenges that have emerged as non-terrestrial and terrestrial wireless networks begin operating as a single system, and how AI reduced-order models can change what’s possible in simulation and system-level design. So without further ado, please welcome Seth and Houman to Fish Fry. Hi Houman, thank you so much for joining me. My pleasure. And hi Seth, thank you for joining me. Yeah, my pleasure as well. Excellent. Okay, so Seth, talk to me about the practical problems in the engineering workflows you’re seeing these days that are driving interest in agentic AI rather than standalone language models. Yeah. So large language models, the AI models we’ve been interacting with for the last couple of years, are great at things like text generation, but the real interest in agentic systems is driven by AI agents that can execute tools and code. The idea there is that if you don’t have an AI agent that’s capable of connecting to the different tools that you’re using or working with code, then engineers might spend a lot of time moving data between different tools instead of actually solving the problems.
And so the focus with agentic AI has been on how we can automate some of these repetitive steps, like actually creating files, editing files, or updating things, as well as how you react to error messages if the AI generates code that isn’t completely perfect. And so it’s really about how we integrate AI with common engineering workflows as opposed to viewing it as just a chatbot. That makes sense. So Seth, why does agentic AI require standardized context sharing, and what problem does the Model Context Protocol actually solve? Yeah, so with these AI agents, once you start using them and hooking them up to different tools, you realize pretty quickly that they need a consistent way to interpret code or data or prompts. You don’t want to be connecting your agentic AI tool to all these other tools in an ad hoc way. Say I’m an engineer working on a control system. I might want my agent hooked up to MATLAB, but maybe I also want it hooked up to GitHub, because my code’s on there. I want it to send me some email notifications as it makes progress, so I want to hook it up to an email client as well. All of these different things can be viewed, in an agentic sense, as tools. And gee, wouldn’t it be great if I had a standardized way to hook all of those up? That’s where the Model Context Protocol really comes in. It was created as a standardized way to exchange the text and context that needs to pass between the large language model and these different tools. And so it’s really what’s enabling all of these multi-tool workflows, even across different software programs. If I may chime in, Seth, if you don’t mind. My customers on the wireless communication side also talk about this notion of initial conditions and repeatability and testability.
If you can communicate the context and you can replicate an experiment with the LLMs from the same starting point, that could make the verification process easier. Is that something that you’ve heard about from your customers too? Yeah, definitely. I think that’s a really important point as well. The more context that we give these agents, the more capable they are and the better the results they can give. I think of things as context, you think of things as initial conditions, but I think we’re speaking the same language there. So, Seth, how could AI agents and standardized protocols change how engineers interact with models across organizational or even tool boundaries? Yeah. So, this is a really good question, and historically it’s been a real challenge across engineering. Many years ago, I used to work on design optimization types of problems. With design optimization, you have all of these challenges associated with bringing together the different components of a model that you need to evaluate in order to optimize it. Those different components might even be implemented in different software programs or by different teams. And so there was always this challenge of how you integrate all of those together so that you can create a loop you can hand over to an optimizer and actually come up with the best design. That’s really some of the potential of agentic AI and these standard protocols: now you can have an agent that is able to connect with the engineering work that you’re doing across different programs, and you can have it coordinate these different simulations, get results from one tool, and pass those over to the next tool. And it really reduces a lot of that manual translation work that you might need to do between formats and teams.
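As an aside for readers: the standardized exchange Seth describes can be sketched as the JSON-RPC 2.0 messages that the Model Context Protocol defines, where an agent first lists a server’s tools and then calls one with structured arguments. The tool name and arguments below are hypothetical, for illustration only, not part of any real MCP server.

```python
import json

# A minimal sketch of the JSON-RPC 2.0 messages the Model Context Protocol
# (MCP) uses to let an AI agent invoke a tool. The tool name "run_simulation"
# and its arguments are hypothetical, chosen only for illustration.

# The agent first asks the server which tools it offers.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# The agent then calls one of the advertised tools with structured arguments.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "run_simulation",  # hypothetical tool name
        "arguments": {"model": "controller", "stop_time": 10.0},
    },
}

# Client and server exchange these messages as JSON text on the wire.
wire_message = json.dumps(call_request)
print(wire_message)
```

Because every tool, whether it fronts MATLAB, GitHub, or an email client, speaks this same message shape, the agent needs no ad hoc integration code per tool.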
You’re often relying on the knowledge of the large language model and its capability to do that connectivity, that glue work, for you. That glue work was often necessary, but it’s not the thing engineers want to be working on; they really want to be managing and hitting the deliverables for their projects. And so I think the potential is really exciting when you start to think about having an AI agent that can connect all of these different things together, be aware of the project goals and constraints, and be let loose, in some ways, on optimizing the design. So, Houman, talk to me about the engineering challenges that are emerging as non-terrestrial and terrestrial wireless networks begin operating as a single system. It’s a very exciting time in wireless communication for that convergence between non-terrestrial and terrestrial. And the main problem is essentially scope and scale. I’ll give you an example. When you are walking around with your cell phone connecting to the nearest mobile base station, your distance, even on the cell edge, is about 1 to 2 kilometers maximum. When you are talking to these modern low Earth orbit satellites that are going to provide non-terrestrial connectivity, the distance is 500 to 1,000 kilometers, hundreds of times farther. So that changes the dynamics in terms of the delay, or latency, and the speed with which you can estimate, sense, and respond. So the main engineering challenge there is system engineering, meaning: how do I make these two completely different systems of different scales operate smoothly together? One thing that exacerbates this is that there is no non-terrestrial network without these low Earth orbit satellites, which are not geostationary. Each satellite of the constellation rises from the horizon and sets after two to three minutes.
So the central system that communicates between the rising and setting NTN satellites and the terrestrial network has to do this handoff thousands of times. That is essentially the scope and scaling problem of merging these two different technologies. All right. So from an RF and system design perspective, what does hybrid NTN or terrestrial network connectivity require that past networks did not? The notion that was nice to have in past wireless communication technologies but is a must-have in this new context is using multiple frequencies in the same device, because the frequencies in the terrestrial and non-terrestrial networks are not the same, for obvious reasons. And therefore this ability to switch the context from one to the next, and make sure that in this completely dynamic environment you don’t lose anything, is very important. One more thing that is very important, and that really defines how people look to us at MathWorks for a solution, is the notion of channel modeling. Everything in the wireless communication of the future is dynamic. You sense things, you gather situational awareness, and you respond accordingly. You know the distance between the devices, and you change the frequency, you change the allocation of bits, you change the beamforming. And that adds a lot in terms of the computational complexity you have to build into the system as a result of these new RF requirements that were not there before. That makes sense. Now, why is AI moving directly onto embedded hardware instead of staying in the cloud? One answer: latency, latency, latency. You can put the AI engine in the cloud, but most of the operations at the edge will have to communicate with the cloud, incur latency or delay, and get the answer back. By the time you do that, the situation in the actual transmission to your devices has changed.
So you’re not gaining all the advantages of AI if it’s not real time. By putting some AI computation capability and hardware on the base station, on the satellite itself, at what they call the edge of the network instead of the core of the network and the cloud, you gain that nimble and agile ability to respond to a fast-changing situational environment. That is the main reason. So why is virtual sensing becoming an important embedded AI application, and what problems does it solve? Yes, in the language of the people who are working on 5G and 6G, this technology is called integrated sensing and communication. Historically, wireless communication has been very good at providing service based on average characteristics of the channel medium: we know approximately where you are, we know what you need. But in the future, with 6G and so on, they want to customize to you, and as you change, the communication to you changes. So there’s a lot of sensing required for that. If you put all the sensors in the edge devices, in the cell phones, in the base station, in the satellite, that adds a lot of cost and a lot of requirements on the physical device. But if you do virtual sensing, meaning using AI models to infer what a sensor would sense, that is all software oriented, much lower cost, and centrally managed. That’s why this notion of physical sensing versus virtual sensing has become a key issue in designing the future wireless sensing and communication system. If I could add a little bit to that, Houman: I think one important distinction to make is that earlier we talked about the impact that generative AI and agentic AI are having. Those AI models tend to be large language models with billions to trillions of parameters, and that is very different from the types of AI models we see being used in these embedded AI or virtual sensing types of systems. They’re much smaller. They are much more optimized.
Oftentimes this is everything from traditional machine learning models, like decision trees (even linear regression in some cases can be a suitable virtual sensor), to deep learning models like convolutional neural networks or LSTMs. But these are models that are multiple orders of magnitude smaller, often heavily optimized, and trained on specific data for these embedded types of applications. And so the power, the speed, and the memory requirements are very different. So how do AI-based reduced-order models change what’s possible in simulation and system-level design? So if I can take a stab at it, what Seth just described is exactly at the core of this. The number of parameters and the size of the AI models required for wireless communication at the edge, on the base station and so on, are much smaller than the LLMs, the large language models, because they are doing specific tasks. Everything that wireless communication does at the edge is about allocating and managing resources, and those are much more, as Seth mentioned, machine learning-type, optimization-type algorithms with much less need for parameters, and hence a smaller computational footprint than the large language models. As such, the availability of these reduced models makes it possible to contain them reasonably on an edge device, on embedded hardware. That’s number one. And number two: although the training aspect of these models takes a long time, lots of memory, and so on, if you train these models and implement them at the edge of the wireless system on very efficient embedded hardware, the inference is much less computationally expensive and therefore implementable. So that’s how these things are very applicable to wireless communication at the edge. Yeah, and then I would maybe add a little bit to that.
Where I’ve seen reduced-order models really gaining popularity is in engineering design problems where the physics are either extremely complicated or very computationally intensive to model. Oftentimes we might be building some type of control algorithm, say for an HVAC system in a vehicle, or for something that deals with temperatures or chemical reactions or things like that. Modeling those things at the lowest level, you can get a very accurate model, but the computational time is just so long that it doesn’t give you the freedom to really iterate on your engineering design. And so the other use case we’re seeing for these reduced-order models is to take an initial physics-based model, run some simulations of it, and approximate it with an AI model. You’re actually training an AI model on the outputs of the physics-based model. Now you can iterate with that AI model: maybe you’ve lost some accuracy, but you’ve gained a ton in terms of inference speed. As an engineer, you can use that model to iterate and iterate, tune your parameters, and get them into a place where you’re feeling more comfortable. Then you can go back to that physics-based model to validate that things still work. So whereas we think of virtual sensing as more of a deployed, embedded, or operational use of AI, reduced-order modeling is really this helpful design-time tool for engineers. Fantastic. Well guys, I think that’s all I have time for today. Thank you so much for joining me, Seth. Yep. Thanks, Amelia. And thank you for joining me, Houman. Thank you so much, Amelia. Well, folks, that’s all I’ve got for this week’s Fish Fry.
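The design-time workflow Seth describes, sampling a slow physics-based model and fitting a fast surrogate to its outputs, can be sketched in a few lines. The “physics model” below is just a stand-in function, and a polynomial fit stands in for whatever machine learning model a real project would choose; this is an illustrative sketch, not a MathWorks workflow.

```python
import numpy as np

# Stand-in for an expensive physics-based simulation: one scalar design
# parameter in, one scalar response out. A real model might take minutes
# per evaluation; this one is instant so the sketch is runnable.
def physics_model(x):
    return np.sin(3 * x) * np.exp(-0.3 * x)

# 1. Run the expensive model at a modest number of design points.
x_train = np.linspace(-2.0, 2.0, 30)
y_train = physics_model(x_train)

# 2. Fit a cheap surrogate (the reduced-order model) to those outputs.
#    A degree-11 polynomial is just an illustrative choice of surrogate.
coeffs = np.polyfit(x_train, y_train, deg=11)
surrogate = np.poly1d(coeffs)

# 3. Iterate on the design using the fast surrogate instead of the slow
#    model, then check how far the surrogate drifts from the true physics.
x_test = np.linspace(-2.0, 2.0, 200)
max_error = np.max(np.abs(surrogate(x_test) - physics_model(x_test)))
print(f"max surrogate error on [-2, 2]: {max_error:.4f}")
```

As Seth notes, a real workflow would take the design candidates found with the surrogate back to the physics-based model for final validation, since the surrogate trades some accuracy for inference speed.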
If you’d like even more information about the topics covered in today’s show, head on over to this week’s Fish Fry page on eejournal.com, or check out this week’s YouTube episode as well. Hey, have you checked out EEJournal on social media yet? Well, you should. You can find us at facebook.com/eejournal. If LinkedIn is more your thing, I get it. You can follow us, or me, on LinkedIn, and we are also on BlueSky and Mastodon too. And we have that YouTube channel, youtube.com/eejournal. Folks, it is chock full of all kinds of techie videos, including our very popular Chalk Talk webcast series and our animated series called Libby’s Lab. And of course, you can subscribe to our EEJournal YouTube channel as well. Thank you everyone for tuning in. If you know of any cool new technology, or heck, you just want to chat, shoot me a line at Amelia, that’s Amelia@eejournal.com, or post a comment on our forums on eejournal.com. For the week of February 6th, 2026, I’m Amelia Dalton, and you’ve been fried.


