Designing for energy performance (typically measured with efficiency metrics) means optimizing the power supply's energy consumption, which places the emphasis on operational expenditure (OPEX): essentially, the cost of energy. Optimizing a power solution for form factor instead may conflict with maximal conversion efficiency, effectively shifting the design focus to capital expenditure (CAPEX) and prioritizing upfront cost savings over the amortized savings achieved through reduced OPEX.
This distinction can be critical in applications where power OPEX dominates the total cost of ownership (TCO), such as in large-scale data centers.
For untethered applications, power OPEX can be expressed in terms of fuel, range, and/or battery life. Typically, these limited energy sources act as the controlling factors in maximizing system performance. Therefore, it is imperative for engineers to understand the sometimes very complex relationships between supply, load, and the operating environment before articulating which performance factor(s) should be the focus of optimization.
For power solutions, most design parameters ultimately converge on thermal management: keeping critical temperatures (semiconductor junctions, package surfaces, and the printed circuit board, or PCB) below their rated limits under worst-case operating conditions, such as maximum input voltage, full load, and high ambient temperature.
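The worst-case check described above can be sketched as a simple calculation. This is a minimal single-thermal-resistance model; all numeric values below (ambient, dissipation, θJA, rated maximum) are illustrative assumptions, not datasheet figures.

```python
# Worst-case junction temperature check: Tj = Ta + Pdiss * theta_JA.
# A single junction-to-ambient resistance is a simplification; real designs
# use a full thermal network (junction->case->heatsink->ambient).

def junction_temp(t_ambient_c, p_dissipated_w, theta_ja_c_per_w):
    """Estimate junction temperature from ambient, loss, and theta_JA."""
    return t_ambient_c + p_dissipated_w * theta_ja_c_per_w

# Assumed worst-case corner: high ambient, full load, Vin(max)
t_ambient = 70.0      # deg C, assumed maximum ambient
p_loss = 2.5          # W dissipated in the device at full load
theta_ja = 20.0       # deg C/W, assumed junction-to-ambient resistance
tj_max_rated = 150.0  # deg C, assumed absolute maximum junction temperature

tj = junction_temp(t_ambient, p_loss, theta_ja)
print(f"Tj = {tj:.1f} degC, margin = {tj_max_rated - tj:.1f} degC")
# Tj = 120.0 degC, margin = 30.0 degC
```

If the computed margin is negative at any worst-case corner, the design needs better heatsinking, lower losses, or derating.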
Powering Up System Uptime Performance
If output voltage regulation and accuracy are most critical, then optimizing power supply control loop performance (feedback loop stability and load transient response) may take precedence to ensure power delivery remains stable and predictable during abrupt load changes or supply voltage dips and surges.
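One common rule of thumb connects load transient response to the control loop: below the loop's crossover frequency the converter cannot react, so the output capacitance must supply the step current. The sketch below uses the approximation C ≥ ΔI / (2π·fc·ΔV); this formula and all numbers are illustrative assumptions, not a specific vendor's design rule.

```python
import math

# Rough output-capacitor sizing for a load step, given the control loop's
# crossover frequency fc. Below fc the loop cannot respond, so the bulk
# capacitance must hold the output within the allowed deviation window.

def min_output_cap(delta_i_a, f_crossover_hz, delta_v_max):
    """C >= dI / (2*pi*fc*dV), a first-order rule of thumb."""
    return delta_i_a / (2 * math.pi * f_crossover_hz * delta_v_max)

# Assumed example: 10 A load step, 20 kHz crossover, 50 mV allowed deviation
c = min_output_cap(delta_i_a=10.0, f_crossover_hz=20e3, delta_v_max=0.05)
print(f"C >= {c * 1e6:.0f} uF")  # C >= 1592 uF
```

The estimate ignores capacitor ESR and ESL, which add their own instantaneous voltage steps and usually tighten the requirement further.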
When operational reliability is the top priority, it typically involves a mission-critical task in which the key performance indicator is the uptime of the application or system itself. In this scenario, system requirements may even call for sacrificing the power supply and other equipment to keep the application running as long as possible, even if operating conditions exceed specifications. This approach differs from designing power supplies with built-in shutdown protection for short-term overloads, overcurrent, or over-temperature conditions.
While not always recognized as bottlenecks in application performance, power and thermal metrics are often the primary limiting factors due to fundamental physics. Limitations may arise from the maximum junction temperature of a power semiconductor device or the maximum current of a line cord or power inductor, but performance is ultimately throttled by power and thermal constraints. System performance can also be derated to maintain an overall thermal envelope or thermal partitions/zones. For example, a processor may handle additional millions of instructions per second (MIPS), or a radio may have extra headroom to further amplify an RF signal, but insufficient thermal management prevents that additional dissipated power from being used effectively.
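Derating to stay inside a thermal envelope is often expressed as a simple curve: full rated output up to a knee temperature, then a linear reduction. The knee temperature and derating slope below are assumed, illustrative figures.

```python
# Illustrative thermal derating curve: full rated output up to a knee
# ambient temperature, then a linear derate (here 2.5% of rated power
# per degC above the knee). All figures are assumptions for illustration.

def derated_power(p_rated_w, t_ambient_c, t_knee_c=50.0, pct_per_degc=2.5):
    """Return the usable output power at a given ambient temperature."""
    if t_ambient_c <= t_knee_c:
        return p_rated_w
    derate = 1.0 - (t_ambient_c - t_knee_c) * pct_per_degc / 100.0
    return max(0.0, p_rated_w * derate)

print(derated_power(100.0, 40.0))  # 100.0  (below the knee: full power)
print(derated_power(100.0, 60.0))  # 75.0   (10 degC above knee: -25%)
```

The same shape applies whether the derated quantity is supply output power, processor MIPS, or RF transmit power: the thermal envelope, not the nameplate rating, sets the usable performance.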
Optimizing Energy Performance and Power Efficiency
Power is often taken for granted, not only regarding the complexity and specialized requirements of power solutions but also in terms of availability. As noted earlier regarding thermal bottlenecks, systems may be designed with a slim gap between peak load demand and power source capacity to save costs or fit the power supply into a smaller space. Beyond neglected loop control and transient design challenges, a power gap can also occur if the power subsystem analysis used an insufficient margin of safety and did not account for all loads sourced by a common rail or by aggregated upstream rails.
Most power subsystems involve multiple levels of voltage conversion from offline AC to mid-bus voltages (typically 48/24/12Vdc) and finally to low voltage for ASICs and logic circuits (typically ≤5Vdc). Greater attention is typically given to the efficiency of lower-level voltage rails, where load current increases as bus voltage decreases, making dissipated losses more dominant and critical to overall thermal management. Even with careful attention at the load end, impacts from upstream power conversion can be overlooked. Therefore, it is vital to develop an interactive system power budget model that considers load vs. efficiency curves for all power supplies and the aggregate effects of transient performance from end load all the way upstream to the offline source.
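The interactive power budget model described above can be sketched by propagating load power upstream through each conversion stage, so per-stage losses and the total input power become visible rather than just the point-of-load efficiency. The stage names and efficiency figures below are assumptions for illustration.

```python
# Cascaded power budget sketch: walk from the end load upstream through
# each conversion stage, dividing by that stage's efficiency at its
# operating point. Efficiencies here are assumed single-point values;
# a real model would interpolate full load-vs-efficiency curves.

STAGES = [                    # ordered downstream -> upstream
    ("12V->1.0V PoL",  0.88),
    ("48V->12V IBC",   0.96),
    ("AC->48V front",  0.94),
]

def budget(load_w):
    """Return total input power and per-stage (name, out, in, loss) rows."""
    p = load_w
    rows = []
    for name, eff in STAGES:
        p_in = p / eff
        rows.append((name, p, p_in, p_in - p))
        p = p_in
    return p, rows

p_input, rows = budget(50.0)
for name, p_out, p_in, loss in rows:
    print(f"{name}: out {p_out:.1f} W, in {p_in:.1f} W, loss {loss:.1f} W")
print(f"Total input: {p_input:.1f} W, end-to-end eff {50.0 / p_input:.1%}")
```

Even with three individually respectable stages, the end-to-end efficiency is the product of all three (here about 79%), which is why upstream conversion cannot be ignored when budgeting at the load end.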
Is Power Supply Performance Disaggregated from System Performance?
Of course not. Although sometimes misperceived, every electronic system requires power, and there is a direct relationship between power supply performance and overall system success. This relationship is often oversimplified (supply turns on, system powers up), ignoring the stability of the power supply under load. If the load demands a faster transient response than the power supply can provide, the control loop can be destabilized, causing poor voltage regulation, startup failure, unintended tripping of protection circuits, or excessive electromagnetic emissions and electromagnetic compatibility (EMC) issues.
A power solution’s viability also depends on environmental conditions. It is common to derate a power supply’s rated output at lower supply voltages, ultimately limited by thermal bottlenecks. More available power does not always mean higher usable load if dissipated power is constrained. Operating at higher elevations, where atmospheric pressure is lower, requires further derating: each 1000ft (300m) of altitude is roughly equivalent to a 1°C rise in ambient temperature at sea level. Isolation requirements must also be reinforced at high altitudes to prevent flashover. For this reason, RECOM specifies operating altitude in AC/DC power supply datasheets.
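The altitude adjustment can be folded into the thermal design as an "effective ambient" temperature, using the roughly 1°C per 1000ft (300m) equivalence mentioned above. The sample values are illustrative.

```python
# Altitude-adjusted ambient sketch: reduced air pressure at altitude
# weakens convective cooling, which is commonly approximated as an
# equivalent ambient temperature rise of ~1 degC per 300 m (1000 ft).

def effective_ambient(t_ambient_c, altitude_m, degc_per_300m=1.0):
    """Return the sea-level-equivalent ambient temperature for derating."""
    return t_ambient_c + (altitude_m / 300.0) * degc_per_300m

# Assumed example: 40 degC ambient at 3000 m elevation
print(effective_ambient(40.0, 3000.0))  # 50.0 -> derate as if 50 degC ambient
```

The derating curve is then applied against this effective ambient rather than the measured one; isolation clearances are a separate altitude concern handled by safety standards.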
Summary: Power Supply Design for Reliability, Efficiency, and Thermal Performance
Some deployment scenarios and power architectures intentionally attempt to disaggregate power supply performance from system performance. This typically refers to redundant applications with a power source budget greater than the system power budget. Redundant power supplies sharing a common load bus tend to current-share (typically within 10% of each other), meaning each supply operates well below its maximum rated output.
The most fundamental redundancy is a 1+1 configuration (the simplest case of N+1), where two (usually identical) power supplies share the system load even though a single unit can handle the full load. With system power budget margins, each supply may run at only 30–40% of its maximum rated output, even at peak system demand. If both the power supplies and the system were designed and qualified for full load, the demonstrated life of the power supplies would differ significantly from that of the system, since the system components carry the full thermal stress while each sharing supply carries roughly half.
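The load fractions in a 1+1 configuration, and the step seen by the surviving unit after a failure, can be sketched directly. The system and rated power figures below are illustrative assumptions.

```python
# 1+1 redundancy sketch: with ideal current sharing, each active supply
# carries an equal slice of the system load. After one unit fails, the
# survivor must carry the full load alone.

def per_unit_load(p_system_w, p_rated_w, n_units, n_failed=0):
    """Return each active supply's load as a fraction of its rating."""
    active = n_units - n_failed
    return p_system_w / active / p_rated_w

p_sys, p_rated = 600.0, 1000.0   # assumed system demand and unit rating
print(f"Both units active: {per_unit_load(p_sys, p_rated, 2):.0%} of rating")
print(f"One unit failed:   {per_unit_load(p_sys, p_rated, 2, 1):.0%} of rating")
# Both units active: 30% of rating
# One unit failed:   60% of rating
```

The lightly loaded sharing case is what decouples supply stress (and demonstrated life) from system stress, while the failed case confirms the survivor still has headroom.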
Another example of disaggregation is load shedding/sharing. Sizing a power solution for the sum of all worst-case load maxima is often unnecessary, since not all loads peak simultaneously; such overdesign leads to a larger, costlier, and less efficient power solution than required. If major system loads are in antiphase (e.g., compute and memory), a smaller power solution can be used with intelligent power management (IPM) techniques.
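The antiphase argument reduces to comparing the sum of individual load peaks with the peak of the summed load profile. The load profiles below are made-up samples chosen to be in antiphase.

```python
# Sum-of-peaks vs peak-of-sum sketch: when two load profiles peak at
# different times, the combined demand never reaches the sum of their
# individual maxima, so a smaller supply suffices with IPM scheduling.

compute = [20, 60, 20, 60, 20]   # W over time (assumed profile)
memory  = [50, 15, 50, 15, 50]   # W over time, in antiphase (assumed)

sum_of_peaks = max(compute) + max(memory)
peak_of_sum = max(c + m for c, m in zip(compute, memory))

print(f"Sum of individual peaks: {sum_of_peaks} W")  # 110 W
print(f"Peak of combined load:   {peak_of_sum} W")   # 75 W
```

Here a supply sized for the combined peak (75W) rather than the naive worst case (110W) saves roughly a third of the capacity, provided the IPM scheme guarantees the loads stay out of phase.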
Some power supplies are designed to ride through short-term overcurrent events without triggering overcurrent protection (OCP). For example, systems with multiple Power-over-Ethernet (PoE) ports may demand 120% of rated load for <200ms. In these cases, the power supply can ride through such events without tripping OCP while maintaining short-circuit protection (SCP) for longer overcurrent events. Digital control cores, such as the RACM1200-V from RECOM, allow easy programming of the power supply's response to these events.
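The ride-through behavior can be sketched as a small decision rule. The thresholds below (120% for 200ms, short-circuit level at 200%) are illustrative assumptions based on the PoE example above; they are not the RACM1200-V's actual registers or limits.

```python
# Programmable overcurrent response sketch: tolerate moderate overloads
# for a bounded time (ride-through), trip OCP when the overload is too
# large or lasts too long, and trip SCP immediately on a short circuit.

OCP_LIMIT = 1.20   # max load (fraction of rating) tolerated short-term
RIDE_MS   = 200    # maximum ride-through duration in milliseconds
SCP_LIMIT = 2.00   # at or above this, treat the event as a short circuit

def ocp_action(load_frac, over_ms):
    """Decide the supply's response to a load at load_frac of rating
    that has been above 100% for over_ms milliseconds."""
    if load_frac >= SCP_LIMIT:
        return "shutdown (SCP)"
    if load_frac <= 1.0:
        return "normal"
    if load_frac <= OCP_LIMIT and over_ms <= RIDE_MS:
        return "ride-through"
    return "shutdown (OCP)"

print(ocp_action(1.15, 150))  # ride-through
print(ocp_action(1.15, 250))  # shutdown (OCP)
print(ocp_action(2.50, 1))    # shutdown (SCP)
```

A digitally controlled supply makes these thresholds and timeouts configuration parameters rather than fixed analog trip points, which is what allows the same hardware to serve both PoE-style bursty loads and more conservative applications.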