[Soekris] high CPU usage for network interrupts on 4801
glisten at witworx.com
Mon Sep 17 12:10:24 UTC 2007
On Mon, 17 Sep 2007 07:49:28 -0400, Bob Camp wrote:
>If you had the silicon resources you would directly ID each device as
>it interrupted. Back a long time ago (as in I had hair on my head
>then) there wasn't enough silicon available for that kind of thing.
>Today it would be pretty easy to do chip-wise. Changing every
>motherboard, card, and OS in existence might create a few problems.
>Interrupt sharing has been a way of life since the 80286. You can
>fiddle and poke what you put where, but you still put multiple
>devices on each one. There simply are not enough interrupts available
>to do it any other way.
Hehe! Try going back a bit further. Having written plenty of 8080 code,
I can tell you that very early in its life we ran out of interrupts. It
turned out to be no big deal to check an interrupt status register and
thereby use one INT for 8 ports.
Careful planning of the bit priorities in the status register allowed
the higher priorities to be handled first. Even at the breathtaking CPU
clock speed (2 MHz) we were not missing critical ints in real-time
industrial applications.
Don't get me started on 4040 code! It never happened, I swear, I was ...
>You could argue that on a single-board device you don't have the
>same constraints. That's true as far as it goes. In order to get the
>silicon changed, whatever you do will have to work with that OS from
>Redmond. If it doesn't, then there's no volume / money to do it.
I haven't looked at the code, but there are 4-port NICs using the
DEC-designed (more recently Intel-made) Ethernet chips where the brands
using one int for the whole board beat the crap out of boards with an
int per port. Check the OpenBSD code for the de or dc drivers; the
clues must be there somewhere.
Me...a skeptic? I trust you have proof.
More information about the Soekris-tech mailing list