Not all controls are designed equal, and not all risks are equal. We know this to be a potential truism, yet the gap between these two positions nearly always leaves some organisations exposed.
How a typically bad audit plays out: before auditors assess internal control effectiveness, they usually start out with some loose definition of the concept, perhaps something along the following lines: "Control effectiveness is a process for assuring achievement of an organisation's objectives in operational effectiveness and efficiency, reliable financial reporting, and compliance with laws". This candidate definition of internal control can be found here [LINK].
Then, having defined the superficial, the team of reviewers enters a business unit and hunts for controls without reference to risks. In effect, they are attempting to assess whether generic things are 'effective' against an empty scope. It's a good business of bad governance, and a lot of people are doing it.
The problem with this loose approach to auditing is that a single, empty definition won't help auditors make a purposeful assessment of an outcome. We have to remember that many risks, certainly the serious ones, give us little direct experience or, at best, only historical statistics.
Worse still, without considering the state in which a control should prove adequate (that is, when you really need it, when you are facing a threat head-on), you never really know whether your state of preparedness is even designed correctly, let alone sufficient for the situation at hand. It is a dilemma of not living the risk but merely hypothesising about it, and such oversight isn't worth paying much for, in my opinion.
This way of working is potentially harmful, and it allows ridiculous business practices to persist so that internal audit can elevate itself into a bureaucracy.
In the images above, we can perceive a fire exit that is laden with obstructions as perhaps not the most effective, whatever that outcome truly looks like. The fire exit in the left image, however, may appear effective and ready to go, yet it is probably not even fit for purpose from the outset. For what it's worth, all the controls shown here are a fail.
Assessing controls should be a purposeful exercise, one would hope; an exercise that is objective rather than subjective. Consequently, when we witness any of the following statements in an audit report, we should accept that the assessment process is potentially originating from that broken, bureaucratic, ideological corner.
1. The control is 100% effective --> Busted mentality
2. There is no risk here --> Busted mentality
3. Residual risk is zero --> Busted mentality
4. When can you turn a control off?
5. There have been no events --> People only see risk after an event has occurred
Most solid auditors I have met would see the first three statements above as inappropriate responses in an audit, but they may still be comfortable with the fifth assertion: "There have been no events", or should we say, thank goodness there have been no events.
Good risk~control practice accepts that a risk is an event, but it is just as much a state of being; it is not luck that something hasn't occurred. "Good risk management is when there are no events, and bad risk management is when something happens" is a conditional type of thinking that leads to a blame culture, and auditors need to move beyond this interpretation of control effectiveness.
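The "no events" fallacy can be made concrete with a small probability sketch. The figures below are hypothetical, purely for illustration: even a very real risk will often produce zero observable events over a long window, so silence is weak evidence that a control is working.

```python
# Probability of observing zero events over n years, for a risk with a
# fixed annual probability p of occurring (independent years assumed).
# Hypothetical figures for illustration only.

def prob_no_events(p_annual: float, years: int) -> float:
    """Chance that a real, unmanaged risk produces no observable events."""
    return (1.0 - p_annual) ** years

# A 5%-a-year risk stays silent for an entire decade roughly 60% of the time.
decade_silence = prob_no_events(0.05, 10)
print(f"P(no events in 10 years) = {decade_silence:.2f}")  # ~0.60
```

In other words, "there have been no events" is the most likely observation even when the risk is entirely unmanaged, which is exactly why it cannot stand in for an assessment of control effectiveness.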
Risk Response Maturity
Really robust risk~control environments move far beyond these subtleties and would, at the very least, have the following in place:
A clear appreciation of the potential outcomes from a portfolio of objectives. That is, good and bad responses are understood at a level where why these states occur, and how to detect them, sits in the common knowledge set possessed by operators. More so, an 'engineered response' for different conditional positions is also present ~ that is 'Design Effectiveness'!
Finally, are the operators able to function as planned when the risk state changes? Whether they have the capability, capacity, authority and confidence to effect control will determine whether the control is 'Operationally Effective', and that is why we do risk assessments and drills.
So then, can we have 100% risk~control effectiveness?
Perhaps only when the control is self-healing or, even better, when the objective outperforms because of a disaster rather than in spite of it. Either way, all of this has to be planned, not accidental, before we can say it is 99.9996% effective.
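A figure like 99.9996% is easier to reason about when converted into expected failures per million demands on the control; the arithmetic below is illustrative only, using the percentage quoted above alongside a hypothetical "99% effective" comparison.

```python
# Convert a stated control-effectiveness percentage into the expected
# number of failures per million demands. Illustrative arithmetic only.

def failures_per_million(effectiveness_pct: float) -> float:
    """Expected control failures in one million demands on the control."""
    return (100.0 - effectiveness_pct) / 100.0 * 1_000_000

print(f"{failures_per_million(99.9996):.1f}")  # ~4 failures per million
print(f"{failures_per_million(99.0):.0f}")     # 10000: far weaker than it sounds
```

The contrast is the point: a control that "sounds" effective at 99% still fails ten thousand times per million demands, which is why precise, planned figures matter more than comfortable round numbers.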