# My Theory on DC Motor and Controller Matching



## deadtoaster2 (Oct 23, 2008)

please do. thanks for the info.


----------



## Gary Sconce (Oct 4, 2008)

Thanks so much for putting out the effort for us like you are doing. It is greatly appreciated.


----------



## TheSGC (Nov 15, 2007)

Here's the table so far:

| Advanced DC motor | Peak motor amps | Programmable controller | Discrete controller |
|---|---|---|---|
| A00-4009 | 350 A | Up to 72 V, 400+ A peak | Curtis 1209B, 1221C or 1231C |
| K91-4003 | 350 A | Up to 96 V, 400+ A peak | Curtis 1221C or 1231C |
| L91-4003 | 750 A | Up to 120 V, 750+ A peak | Curtis 1231C |
| X91-4001 | 600 A | Up to 144 V, 600+ A peak | Curtis 1231C |
| 203-06-4001 | 800 A | Up to 144 V, 800+ A peak | Curtis 1231C |
| FB1-4001 | 900 A | Up to 156 V, 1000+ A peak | Curtis 1231C |


----------



## Tesseract (Sep 27, 2008)

TheSGC said:


> ...
> Now here is some good info on the controllers. Logisystems, Kelly, Zilla and Alltrax are programmable units. Which means that they use a microprocessor. But this also means the current limiting "sensing" is somewhat slower than a traditional discrete component like the Curtis brand. This has to do with the Analog to Digital conversion done in the controller versus a quick OP-AMP system.
> 
> So when choosing a programmable controller, check your motor peak current and make sure the PEAK controller current and the CONTINUOUS controller current are equal to or better than your motor.
> ...


You have identified the real Achilles' heel here, TheSGC. That said, it doesn't follow that every programmable controller responds too slowly to overcurrent, or that every discrete design responds fast enough. Some other things to consider, then:

* Even fast op-amps make extremely lethargic comparators, and the ubiquitous LM324 (used in the Curtis for pretty much every function from PWM generation to current limit sensing) is about as *slow* as they come. All op-amps make poor comparators, but bipolar op-amps (such as the LM324) are the worst: they will typically take several hundred microseconds to slew their output from one saturated state to the other. Since even short-circuit-rated IGBTs are typically guaranteed to withstand a short for only 10 µs, several hundred µs clearly ain't gonna cut it.

* Microcontrollers are perfectly capable of being used to monitor "average" currents, temperatures, etc. and respond to changes in such accordingly. When critical response time is needed, a true comparator (e.g. LM393) should be used to take over control locally (e.g. - short out gate drive). A fault signal should be sent back to the uC at which point, probably some hundreds of instruction cycles later, it will then duplicate the efforts of the comparator and terminate the gate drive signal. This is *probably* the technique used in the Zilla. It's certainly the technique I'm using for my nascent controller.

* Failure from catastrophic overcurrent in most semiconductor devices leads to bond-wire vaporization. This is a benign failure mode (controller just stops). Failure from overvoltage (called, appropriately enough, avalanche) usually results in a short-circuit. This is a not-so-benign failure mode (controller is stuck on). Usually, though, the extreme current that flows as a result of avalanche soon causes the bond-wires to vaporize as well, which makes it difficult to determine what the true culprit was.
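To put those response times in perspective, here's a back-of-the-envelope sketch. The pack voltage and motor inductance below are assumptions for illustration (not from any datasheet), but they show how much extra current piles up while the protection circuit is still reacting:

```python
# Rough estimate of current overshoot during a fault-detection delay.
# A stalled series DC motor looks almost like an inductor across the pack,
# so dI/dt ~ V / L. All values here are illustrative assumptions.

V_PACK = 144.0      # pack voltage, volts (assumed)
L_MOTOR = 50e-6     # motor + cabling inductance, henries (assumed)

def current_rise(delay_s):
    """Extra amps accumulated while the protection is still reacting."""
    return V_PACK / L_MOTOR * delay_s

# LM393-class comparator: ~1 us propagation delay
# LM324-class op-amp pressed into comparator duty: ~300 us rail-to-rail
for name, delay in [("LM393 comparator", 1e-6), ("LM324 as comparator", 300e-6)]:
    print(f"{name:20s}: +{current_rise(delay):6.0f} A during the delay")
```

With these (assumed) numbers the comparator lets through a few amps of overshoot while the op-amp lets through hundreds, which is the whole point of the bullet above.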


----------



## vgslimo (Oct 21, 2008)

so your nascent controller? is it for sale or where can we see it?


----------



## Tesseract (Sep 27, 2008)

vgslimo said:


> so your nascent controller? is it for sale or where can we see it?


I don't want to hi-jack this thread, vgslimo, nor appear to be self-promoting so let's just say that the controller is not for sale and cannot be seen by anyone except for me and the people that are paying me to design it.

***

Further thoughts on controller failure:

If you look inside a modern VFD for industrial applications you'll always find IGBT modules (either half-bridges or fully integrated "intelligent power modules"), not a bunch of little MOSFETs or IGBTs in parallel. The short answer why is that it's very difficult to get multiple devices to share current equally. This applies even to MOSFETs, whose positive drain-source resistance temperature coefficient certainly helps them share current _under static conditions_.

It is during the switching transitions, though, that failure almost always occurs. Consider, for example, the Kelly KD12600, which uses a bank of 12 MOSFETs to achieve a 600 A rating; each MOSFET will thus be expected to carry a maximum of 50 A. During the on-time this will naturally occur as a result of the positive tempco of Rds[on]: as one MOSFET gets hotter, its drain-source resistance rises, diverting current away from it. But what happens if one MOSFET in the bank has a lower gate threshold voltage - that is, it turns on before the other MOSFETs and turns off last? It is the mismatching of the _dynamic_, or switching, characteristics of the MOSFETs that really causes problems. That one poor MOSFET will have to suffer a huge pulse of current at turn-on and all of the dI/dt spike produced during turn-off (if, as sadly seems to often be the case, there is no RCD snubber fitted).

Obviously, the more MOSFETs (or IGBTs) in parallel, the greater the chance that one will be mismatched in switching speed, gate charge, threshold voltage, etc. from the rest. There are ways to mitigate this problem using series inductors, snubbers, etc., but the general rule of thumb is that if you need to parallel more than four devices you ought to pick a bigger device. This applies to the freewheeling diodes as well - one must consider turn-on (forward recovery) and turn-off (reverse recovery) times as well as the more typical forward voltage and peak current ratings.
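The static-sharing half of this can be sketched numerically. This is a toy fixed-point model with made-up Rds(on), tempco and thermal numbers (nothing from a real KD12600 or any datasheet), just to show that the positive tempco only *partially* corrects a mismatched device:

```python
# Static current sharing among parallel MOSFETs with a positive Rds(on)
# temperature coefficient. All parameters are illustrative assumptions.

N = 12               # devices in parallel (as in the KD12600 example)
I_TOTAL = 600.0      # total load current, amps
R25 = [0.004] * N    # Rds(on) at 25 C, ohms
R25[0] = 0.0032      # ...except one device 20% lower (the mismatch)
TEMPCO = 0.006       # fractional Rds(on) rise per deg C (assumed)
RTH = 1.5            # thermal resistance, C/W (assumed)
T_AMB = 25.0

def solve_sharing(iterations=200):
    """Iterate electrical sharing <-> self-heating until it settles."""
    temps = [T_AMB] * N
    currents = [I_TOTAL / N] * N
    for _ in range(iterations):
        # resistance at the current junction temperature
        r = [R25[i] * (1 + TEMPCO * (temps[i] - 25.0)) for i in range(N)]
        # parallel resistors: current divides inversely with resistance
        g_total = sum(1.0 / ri for ri in r)
        currents = [(1.0 / r[i]) / g_total * I_TOTAL for i in range(N)]
        # junction temps from conduction loss I^2 * R
        temps = [T_AMB + RTH * currents[i] ** 2 * r[i] for i in range(N)]
    return currents

shares = solve_sharing()
print(f"low-Rds device: {shares[0]:.1f} A, the others: {shares[1]:.1f} A each")
```

With these assumed numbers the low-Rds device still ends up carrying roughly 20% more than its siblings even after the tempco feedback settles - and that's the *benign* static case, before any of the dynamic mismatch described above.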


----------



## TheSGC (Nov 15, 2007)

This is just a theory - I know there could be a thousand things wrong, from low-quality MOSFETs to a crappy copper trace.

A DC motor is going to do its best to draw as much current as possible, and if the parts aren't rated for it, or the processor/parts/software is just too slow to limit it, things are going to get messy.

I just think, to be on the safe side, people should match their motors to a proper controller to limit the possibility of the motor causing stress or damage to a controller. If your controller has to start to limit the current the second you hit the pedal, I think the controller is too small for the motor. 

Sorry for the EVil terms, but it's like putting a 4 cylinder in a Chevy Suburban. The SUV is going to strain the crap out of it until it just dies.


----------



## Qer (May 7, 2008)

Tesseract said:


> When critical response time is needed, a true comparator (e.g. LM393) should be used to take over control locally (e.g. - short out gate drive).


Also, if you want a programmable peak current (usually you don't, since the thing you want to protect is the transistor module and its specs are known), the recommended way to do that with a microcontroller is not to use the built-in ADC (they're usually rather slow) but a comparator whose reference voltage is generated by the microcontroller, either with a DAC or an LP-filtered PWM output.
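As a quick sanity check on the LP-filtered-PWM idea, here's a sketch with assumed values (the PWM frequency, R and C are illustrative, not a recommendation) showing the cutoff and the worst-case ripple you'd get on the reference from a simple RC filter:

```python
# Sizing the RC low-pass that turns a PWM output into an analog reference
# voltage for the current-limit comparator. All values are assumptions.
import math

F_PWM = 20e3     # microcontroller PWM frequency, Hz (assumed)
R = 10e3         # filter resistor, ohms (assumed)
C = 1e-6         # filter capacitor, farads (assumed)
VDD = 5.0        # logic supply, volts

# first-order cutoff: well below F_PWM so the average survives, ripple doesn't
f_cutoff = 1 / (2 * math.pi * R * C)

def ripple_pp(duty):
    """Approx. peak-to-peak ripple for RC >> PWM period (segments linearized).
    Worst case is at 50% duty."""
    return VDD * duty * (1 - duty) / (F_PWM * R * C)

print(f"cutoff: {f_cutoff:.1f} Hz")
print(f"worst-case ripple at 50% duty: {ripple_pp(0.5) * 1000:.2f} mV")
```

A few millivolts of ripple on a 5 V reference is usually negligible next to the tolerance of the shunt or Hall sensor feeding the other comparator input, which is why the trick works.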



TheSGC said:


> I just think, to be on the safe side, people should match their motors to a proper controller to limit the possibility of the motor causing stress or damage to a controller. If your controller has to start to limit the current the second you hit the pedal, I think the controller is too small for the motor.


Personally I think it's better to have a controller with working current limiting instead. The problem is that a motor is close to a short circuit at 0 RPM, and it's simply not realistic to build a controller that can handle that current spike. To begin with, it would make the controller unreasonably expensive and complex (which no one would probably be prepared to pay for anyway), and then you get secondary problems: whether the battery pack can handle the load, whether you'll twist the drive train into a corkscrew, and whether it's even possible to put all that torque down without the wheels spinning madly, which is just a waste of energy (and rubber).

So I'd say it doesn't matter if the controller can toast the motor and blow the battery pack to Kingdom come with brute force; it's simply an impractical approach. To me the right way is to ramp up the PWM over time: slow enough to give the microcontroller a chance to accurately monitor the motor current and hold it to the maximum level, but fast enough that if the driver slams the accelerator the car reacts practically immediately.
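That ramp-plus-limit idea can be sketched in a few lines. The stalled-motor model, gains and limits below are all made-up illustration values, just to show the duty cycle settling where the current limit bites rather than slamming to 100%:

```python
# Minimal sketch of a ramped duty cycle with software current limiting.
# Motor model, gains and limits are illustrative assumptions.

I_LIMIT = 400.0      # controller current limit, amps (assumed)
RAMP_UP = 0.002      # duty increase per control tick (assumed)
CUT_BACK = 0.01      # duty decrease per tick when over the limit (assumed)

def control_step(duty, throttle, current):
    """One control tick: cut back hard over the limit, else ramp gently."""
    if current > I_LIMIT:
        return max(0.0, duty - CUT_BACK)
    return min(throttle, duty + RAMP_UP)

def motor_current(duty, v_pack=144.0, r_motor=0.05):
    """Crude stalled-motor model: zero back-EMF, so I = D * V / R."""
    return duty * v_pack / r_motor

duty, peak = 0.0, 0.0
for _ in range(1000):       # driver floors it: throttle = 100%
    i = motor_current(duty)
    peak = max(peak, i)
    duty = control_step(duty, throttle=1.0, current=i)

print(f"peak current: {peak:.0f} A (limit {I_LIMIT:.0f} A)")
```

Even with the driver flooring it, the worst overshoot is set by one ramp step's worth of current, not by the motor's short-circuit current - which is exactly the point being made above.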

Even an ICE has a quite measurable response time, and we've been able to live with that for ages, so I don't think an EV has to react in microseconds. To create a controller that slams the PWM duty cycle to 100% without keeping track of the current is, well... let's just say that I personally, as an engineer, wouldn't be proud of that solution.


----------



## TX_Dj (Jul 25, 2008)

TheSGC said:


> First, the peak AMP of the controller must be equal to or more than the peak motor current. This is because a DC motor acts like a short at a dead stop and can draw the max current at this time, which also happens to be the same time a lot of the controllers have been failing at.



Which is *exactly* the reason why Curtis controllers "whine."

They do so at low motor voltages, because they shift from a 15 kHz PWM to a 1.5 kHz PWM. The 1.5 kHz sets up an audible resonance (as does the 15, but many folks can't hear frequencies that high... i.e. if you don't hear CRT monitors or TVs "whine" you won't hear the higher frequency PWM either).

At these low motor RPMs, it's easier to limit the current with longer PWM cycles. At low RPM the current comes up very fast and takes a long time to drop, so pulsing at a lower frequency becomes a very effective way to keep the current from exceeding what the controller can handle.
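One plausible contributing factor is easy to put numbers on (the minimum realizable gate pulse below is a hypothetical figure, not a Curtis spec): for a fixed minimum on-time, a slower PWM can command a much finer minimum average motor voltage, which matters exactly when the motor is near stall and needs tiny duty cycles:

```python
# Why a lower PWM frequency can help at very low duty cycles: the gate drive
# has some minimum realizable pulse width, so the smallest average voltage
# step shrinks as the PWM period grows. Numbers are assumptions.

V_PACK = 144.0     # pack voltage, volts (assumed)
T_MIN_ON = 2e-6    # minimum realizable on-pulse, seconds (assumed)

def min_duty(f_pwm):
    """Smallest non-zero duty cycle achievable at a given PWM frequency."""
    return T_MIN_ON * f_pwm

for f in (15e3, 1.5e3):
    print(f"{f / 1e3:4.1f} kHz: minimum duty {min_duty(f) * 100:.2f}% "
          f"-> minimum average motor voltage {min_duty(f) * V_PACK:.2f} V")
```

Under these assumptions the 1.5 kHz mode can apply a ten-times-smaller average voltage than the 15 kHz mode before the pulse width bottoms out - finer control right where the motor looks most like a short.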

Again... Curtis has years and years of experience. The "whine" isn't a design flaw, it's a purpose-built feature to eliminate the very condition that you're saying is when most of the cheaper controllers "pop".


----------



## TheSGC (Nov 15, 2007)

TX_Dj said:


> Which is *exactly* the reason why Curtis controllers "whine."
> 
> They do so at low motor voltages, because they shift from a 15 kHz PWM to a 1.5 kHz PWM. The 1.5 kHz sets up an audible resonance (as does the 15, but many folks can't hear frequencies that high... i.e. if you don't hear CRT monitors or TVs "whine" you won't hear the higher frequency PWM either).
> 
> ...


 
I know why the whine exists (it's the same reason my 555-based controller runs at only 2 kHz), but the Curtises have also been blowing with the new Warp 11 and 13 inch motors.

BTW, this "whine" could be installed in any Kelly or Logisystems controller with a few lines of code.

Maybe it's because I am a CE student, but I would always make sure my controller could handle the motor in any situation and that the MOSFETs/IGBTs could handle the maximum current spikes of these motors.


----------



## Qer (May 7, 2008)

TX_Dj said:


> When at these low motor RPMs, it's easier to limit the current with longer PWM cycles. The current comes up so very fast, and takes so long to drop at low RPMs, that pulsing at a lower frequency becomes a very effective way to prevent the current from exceeding what the controller can handle.


If your controller is using analogue feedback loops, yes. There's no need for a microcontroller to step down the frequency at low RPMs, since software can be written in a far more sophisticated way than is realistic to realise in analogue hardware.



TX_Dj said:


> Again... Curtis has years and years of experience. The "whine" isn't a design flaw, it's a purpose-built feature to eliminate the very condition that you're saying is when most of the cheaper controllers "pop".


Not eliminate - they're simply increasing the margins. If stepping down from 15 kHz to 1.5 kHz means the controller survives the nastiness that kills it when it's loaded with an inductance of x µH, the same situation will most likely occur again somewhere around 10x µH.

Now, I'm not saying it's a bad fix. On the contrary, from what I've read in this forum (especially Tesseract's autopsy of the Kelly controller) I'd say it's a pretty smart solution, and as things are now I'd go for a Curtis as well. But it's still just a quick and dirty fix: it doesn't take the problem away, it just hides it somewhere where hopefully no one will step in it.

However, what I hope for is some 21st-century technology. Analogue feedback loops are reliable and well tried, I'll grant you that, but they're also very inflexible. In the case of motor controllers for mad DIYers, who might connect them to anything from an old, manually rewound vacuum-cleaner motor to a twin Warp 13, there are bound to be situations where the analogue feedback loop gets totally lost, oscillating or slamming into a max or min value and burning up a few hundred, or thousand, dollars in the process.

A microcontroller with proper software will have an easier time adjusting when things go extreme. But then again, with crappy software, things like this also happen:

http://www.youtube.com/watch?v=kYUrqdUyEpI

Still, in the long term I think digital controllers will arrive and conquer, like in every other market. And, as with all other electronics, system reliability will improve as a result. Analogue stuff is sooo 20th century.


----------

