Tektronix Technical Forums are maintained by community involvement. Feel free to post questions or respond to questions by other members. Should you require a time-sensitive answer, please contact your local Tektronix support center here.
I know being over-careful is considered good practice, especially when presenting to management, but technical facts should not be distorted for that sake.
What I mean is: if an FPGA's low-level functionality (registers, LUTs, block primitives, and a design that has met timing) does not work as expected on each and every clock edge, then we should either give up on the device or add mitigation logic of some kind.
(If you need the basic background on FPGAs, see: http://www.apogeeweb.net/article/67.html).
To my understanding, mitigation is the domain of safety-critical applications such as aerospace, and it requires small designs plus a great deal of extra work to detect or correct events such as single-event upsets (SEUs), etc.
But I notice many FPGA engineers carry remnants of this mitigation mindset everywhere, and it is this that I dislike...
The most famous example: what happens if a state machine enters an unreachable state?
Is it enough just to add a `when others` clause? And does the choice of state encoding imply that FPGA registers could spontaneously go wrong?
I believe that if we do mitigate, we should reset the state machine, reset all associated signals, reset the input module, reset the output module (in short, reset the whole system) and accept that the design will effectively switch off and on as a result. Otherwise, a partial mitigation is pointless.
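To make the argument concrete, here is a minimal VHDL sketch of the pattern in question (the state, signal, and port names are hypothetical). The `when others` branch requests a system-wide reset, as argued above, rather than quietly steering back to `IDLE`:

```
type state_t is (IDLE, RUN, DONE);
signal state   : state_t   := IDLE;
signal sys_rst : std_logic := '0';  -- hypothetical global reset request

process (clk)
begin
  if rising_edge(clk) then
    if rst = '1' then
      state   <= IDLE;
      sys_rst <= '0';
    else
      sys_rst <= '0';
      case state is
        when IDLE =>
          if start = '1' then
            state <= RUN;
          end if;
        when RUN =>
          if finished = '1' then
            state <= DONE;
          end if;
        when DONE =>
          state <= IDLE;
        when others =>
          -- A partial mitigation would quietly do "state <= IDLE;".
          -- The consistent alternative: request a system-wide reset
          -- so that all modules restart together.
          state   <= IDLE;
          sys_rst <= '1';
      end case;
    end if;
  end if;
end process;
```

Note that whether the `when others` branch survives synthesis at all depends on the state encoding the tool chooses (one-hot encodings have many unreachable states; a fully used binary encoding may have none).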
Another example: the code says "if count = n ...", and a reviewer says no sir, add "if count > n" as well, to make it more solid just in case.
My answer is that my counter is designed never to exceed n, so I should not have to guard against the unreachable case of count > n. I do not want to mask any potential bugs...
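For illustration, the two review styles look like this in VHDL (a minimal sketch; the counter width and the constant `n` are hypothetical, and `ieee.numeric_std` is assumed for the `unsigned` arithmetic):

```
-- The counter is designed to count 0..n and wrap, so count > n is unreachable.
signal count : unsigned(7 downto 0) := (others => '0');

process (clk)
begin
  if rising_edge(clk) then
    -- As written by the author: exact comparison only.
    if count = n then
      count <= (others => '0');
    else
      count <= count + 1;
    end if;
    -- The reviewer's "more solid" version would compare "count >= n",
    -- which silently recovers from (and therefore masks) a count that,
    -- by design, should never have exceeded n in the first place.
  end if;
end process;
```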
There might be tens or hundreds of thousands of registers in a design, so why make some of them "more solid" and forget the others?
So, in short: either we mitigate faults properly when the application is critical, or we assume the FPGA will work and trust it. Half-baked solutions for non-critical cases help no one.
Here we need to differentiate between the FPGA's primitive logic per se and the user-made functionality built from that logic.
The FPGA primitives should be trusted, but the user design may not be. Flaky designs are common, and mitigation may be considered here, though it can also mask bugs or merely patch over them.
For example, there may be cases where some logic locks onto a signal, and a loss of lock may need to be mitigated.
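As a sketch of that kind of legitimate mitigation (the signal names are hypothetical), a simple watchdog can request re-acquisition whenever the lock indicator drops, instead of assuming lock is permanent once achieved:

```
-- Watchdog: if the external 'locked' flag drops, request re-acquisition.
process (clk)
begin
  if rising_edge(clk) then
    if rst = '1' then
      reacquire <= '0';
    elsif locked = '0' then
      reacquire <= '1';  -- loss of lock: restart the acquisition logic
    else
      reacquire <= '0';
    end if;
  end if;
end process;
```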
Another example: some designs decide the I/Q pairing of a serial I-Q-I-Q stream from just one single initial check. I would prefer self-correcting logic if I am not sure about my simulation coverage, or if I am not confident about the module that generates the I-Q-I-Q stream, designed by my next-door neighbour.
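A self-correcting variant could, for instance, re-check the pairing continuously rather than once at startup. The sketch below is hypothetical: it assumes the I sample carries a detectable tag (here, bit 15 set), which the original post does not specify; all names are made up:

```
-- Deinterleave a serial I-Q-I-Q stream. 'expect_i' tracks which sample
-- is due next. Instead of deciding the alignment once at startup, we
-- re-check the (assumed) I-sample tag bit on every sample and realign
-- on the fly if the pairing has slipped.
process (clk)
begin
  if rising_edge(clk) then
    if rst = '1' then
      expect_i <= '1';
    elsif sample_valid = '1' then
      if expect_i = '1' and sample_in(15) /= '1' then
        -- Expected an I sample but the tag bit is absent: the pairing
        -- has slipped, so drop this sample and keep expecting I,
        -- rather than trusting a one-time initial check.
        expect_i <= '1';
      else
        if expect_i = '1' then
          i_sample <= sample_in;
        else
          q_sample <= sample_in;
        end if;
        expect_i <= not expect_i;  -- alternate I, Q, I, Q ...
      end if;
    end if;
  end if;
end process;
```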
Though this sounds like a design being asked, unfairly, to check and correct its own inputs, which should be the responsibility of the input module.
Any comments appreciated.