Changing OpenXWD calculation frequency?

TL;DR: Is it possible to change how often the powertrain simulation is run inside Simulink?

We are using CarMaker to develop torque vectoring for our electric car. After prototyping some algorithms in Simulink, our team rewrote them in C so they can run on the target hardware. They then moved to hardware-in-the-loop: the virtual sensor data is sent via UART to the STM, all the calculations are done on the microcontroller just like in the real car, and the results are sent back via UART to Simulink, which then controls the torque sent to the wheels.

Is it possible to change the frequency at which the powertrain subsystem is executed? We would like to match the rate at which Torque Vectoring and the other algorithms are calculated on the actual car (e.g. 100 Hz). That way the microcontroller would receive new sensor data as often as it does on the actual car.
This would also help with the simulation's performance; right now it runs slower than real time for us, which makes CockpitPackage pointless to use.

If that is not possible, we can implement sending data at specified intervals manually inside Simulink, but that would complicate the model and would not help with simulation performance.

Hello Jacek,

If you are using CarMaker for Simulink, all the green blocks (the S-Functions that contain the core functionality of CarMaker) have to run at a 1 ms timestep, and in the exact order in which they were given to you.

I believe the current bottleneck is how slow the UART link and the STM microcontroller are, so I suggest that you only slow down the interface. If you are interfacing with your microcontroller in C, you can gate the send/receive on the modulus of the current cycle number, so that data is exchanged only once every n cycles, as in the sketch below. If you are interfacing through a Simulink block, put Rate Transition blocks before and after the interface block.
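
A minimal sketch of what I mean by cycle-based gating, assuming the function is called once per 1 ms CarMaker cycle; `uart_send_sensors()` and `uart_receive_torques()` are hypothetical placeholders for your existing UART interface code:

```c
#include <stdint.h>

#define SIM_STEP_HZ  1000   /* CarMaker for Simulink runs at a fixed 1 ms step */
#define TV_RATE_HZ    100   /* target torque-vectoring rate on the real car    */
#define DECIMATION   (SIM_STEP_HZ / TV_RATE_HZ)   /* exchange every 10 cycles  */

/* Hypothetical helpers -- replace with your existing UART interface code. */
void uart_send_sensors(void);
void uart_receive_torques(void);

static uint32_t cycle = 0;

void interface_step(void)
{
    /* Talk to the STM only once every DECIMATION cycles (i.e. at 100 Hz),
     * while the rest of the model keeps running at the 1 ms timestep. */
    if (cycle % DECIMATION == 0) {
        uart_send_sensors();      /* push the latest virtual sensor data   */
        uart_receive_torques();   /* read back the computed torque demands */
    }
    cycle++;
}
```

Between exchanges you would simply hold the last received torque values, which is roughly what happens on the real car between controller updates anyway.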

This way you can achieve a balance between simulation speed and communication speed without adjusting the timestep, and I think it will actually benefit your simulation performance as well, as long as there are no other major bottlenecks in what you are running on the host machine.

Best,
Yeonsoo Park
