Power management updates for 6.20-rc1/7.0-rc1
- Remove the unused omap-cpufreq driver (Andreas Kemnade)
- Optimize error handling code in cpufreq_boost_trigger_state() and
make cpufreq_boost_trigger_state() return -EOPNOTSUPP if no policy
supports boost (Lifeng Zheng)
- Update cpufreq-dt-platdev list for tegra, qcom, TI (Aaron Kling,
Dhruva Gole, and Konrad Dybcio)
- Minor improvements to the cpufreq and cpumask rust implementation
(Alexandre Courbot, Alice Ryhl, Tamir Duberstein, and Yilin Chen)
- Add support for AM62L3 SoC to the ti-cpufreq driver (Dhruva Gole)
- Update arch_freq_scale in the CPPC cpufreq driver's frequency
invariance engine (FIE) in scheduler ticks if the related CPPC
registers are not in PCC (Jie Zhan)
- Assorted minor cleanups and improvements in ARM cpufreq drivers (Juan
Martinez, Felix Gu, Luca Weiss, and Sergey Shtylyov)
- Add generic helpers for sysfs show/store to cppc_cpufreq (Sumit
Gupta)
- Make the scaling_setspeed cpufreq sysfs attribute return the actual
requested frequency to avoid confusion (Pengjie Zhang)
- Simplify the idle CPU time granularity test in the ondemand cpufreq
governor (Frederic Weisbecker)
- Enable asym capacity in intel_pstate only when CPU SMT is not
possible (Yaxiong Tian)
- Update the description of rate_limit_us default value in cpufreq
documentation (Yaxiong Tian)
- Add a command line option to adjust the C-states table in the
intel_idle driver, remove the 'preferred_cstates' module parameter
from it, add C-states validation to it and clean it up (Artem
Bityutskiy)
- Make the menu cpuidle governor always check the time till the closest
timer event when the scheduler tick has been stopped to prevent it
from mistakenly selecting the deepest available idle state (Rafael
Wysocki)
- Update the teo cpuidle governor to avoid making suboptimal decisions
in certain corner cases and generally improve idle state selection
accuracy (Rafael Wysocki)
- Remove an unlikely() annotation on the early-return condition in
menu_select() that leads to branch misprediction 100% of the time
on systems with only 1 idle state enabled, like ARM64 servers (Breno
Leitao)
- Add Christian Loehle to MAINTAINERS as a cpuidle reviewer (Christian
Loehle)
- Stop flagging the PM runtime workqueue as freezable to avoid system
suspend and resume deadlocks in subsystems that assume asynchronous
runtime PM to work during system-wide PM transitions (Rafael Wysocki)
- Drop redundant NULL pointer checks before acomp_request_free() from
the hibernation code handling image saving (Rafael Wysocki)
- Update wakeup_sources_walk_start() to handle empty lists of wakeup
sources as appropriate (Samuel Wu)
- Make dev_pm_clear_wake_irq() check the power.wakeirq value under
power.lock to avoid race conditions (Gui-Dong Han)
- Avoid bit field races related to power.work_in_progress in the core
device suspend code (Xuewen Yan)
- Make several drivers discard pm_runtime_put() return value in
preparation for converting that function to a void one (Rafael
Wysocki)
- Add PL4 support for Ice Lake to the Intel RAPL power capping
driver (Daniel Tang)
- Replace sprintf() with sysfs_emit() in power capping sysfs show
functions (Sumeet Pawnikar)
- Make dev_pm_opp_get_level() return value match the documentation
after a previous update of the latter (Aleks Todorov)
- Use scoped for each OF child loop in the OPP code (Krzysztof
Kozlowski)
- Fix a bug in an example code snippet and correct typos in the energy
model management documentation (Patrick Little)
- Fix miscellaneous problems in cpupower (Kaushlendra Kumar):
* idle_monitor: Fix incorrect value logged after stop
* Fix inverted APERF capability check
* Use strcspn() to strip trailing newline
* Reset errno before strtoull()
* Show C0 in idle-info dump
- Improve cpupower installation procedure by making the systemd step
optional and allowing users to disable the installation of systemd's
unit file (João Marcos Costa)
-----BEGIN PGP SIGNATURE-----
iQFGBAABCAAwFiEEcM8Aw/RY0dgsiRUR7l+9nS/U47UFAmmDr5ASHHJqd0Byand5
c29ja2kubmV0AAoJEO5fvZ0v1OO1Q8oH/0KRqdidHzesIQl6gd5WSS/sWdxODRUt
R9dEGQQ6LXCY0z05RAq29HZQf618fYuRFX4PSrtyCvrcRJK7MJKuzK55MRq0MC3c
c/2pL1PdpHexjLXUP9pcoxrYjetsr7SnD6Y0M3JfOPg1E/bG8sp1DlnE8cdqrL0W
lrdB2cEGewT2SVkNhCIQ2n6bwfQwmLlfQl1vXTM8BA7xCjoslePUJlRphAFVAt/J
5fQxSOH0eSxK5PYQFUDM2D2J3uMAN0pFb6eIjwVYYqjABqV//BPl99Rv2W3ElJq7
K/SICRWlvzyINCgF15QAUtQHWdINxSb0GzovECVxODHOv0N4mKHdpNU=
=QlVe
-----END PGP SIGNATURE-----
Merge tag 'pm-6.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management updates from Rafael Wysocki:
"By the number of commits, cpufreq is the leading party (again) and the
most visible change there is the removal of the omap-cpufreq driver
that has not been used for a long time (good riddance). There are also
quite a few changes in the cppc_cpufreq driver, mostly related to
fixing its frequency invariance engine in the case when the CPPC
registers used by it are not in PCC. In addition to that, support for
AM62L3 is added to the ti-cpufreq driver and the cpufreq-dt-platdev
list is updated for some platforms. The remaining cpufreq changes are
assorted fixes and cleanups.
Next up is cpuidle and the changes there are dominated by intel_idle
driver updates, mostly related to the new command line facility
allowing users to adjust the list of C-states used by the driver.
There are also a few updates of cpuidle governors, including two menu
governor fixes and some refinements of the teo governor, and a
MAINTAINERS update adding Christian Loehle as a cpuidle reviewer.
[Thanks for stepping up Christian!]
The most significant update related to system suspend and hibernation
is the one to stop freezing the PM runtime workqueue during system PM
transitions which allows some deadlocks to be avoided. There is also a
fix for possible concurrent bit field updates in the core device
suspend code and a few other minor fixes.
Apart from the above, several drivers are updated to discard the
return value of pm_runtime_put() which is going to be converted to a
void function as soon as everybody stops using its return value, PL4
support for Ice Lake is added to the Intel RAPL power capping driver,
and there are assorted cleanups, documentation fixes, and some
cpupower utility improvements.
Specifics:
- Remove the unused omap-cpufreq driver (Andreas Kemnade)
- Optimize error handling code in cpufreq_boost_trigger_state() and
make cpufreq_boost_trigger_state() return -EOPNOTSUPP if no policy
supports boost (Lifeng Zheng)
- Update cpufreq-dt-platdev list for tegra, qcom, TI (Aaron Kling,
Dhruva Gole, and Konrad Dybcio)
- Minor improvements to the cpufreq and cpumask rust implementation
(Alexandre Courbot, Alice Ryhl, Tamir Duberstein, and Yilin Chen)
- Add support for AM62L3 SoC to the ti-cpufreq driver (Dhruva Gole)
- Update arch_freq_scale in the CPPC cpufreq driver's frequency
invariance engine (FIE) in scheduler ticks if the related CPPC
registers are not in PCC (Jie Zhan)
- Assorted minor cleanups and improvements in ARM cpufreq drivers
(Juan Martinez, Felix Gu, Luca Weiss, and Sergey Shtylyov)
- Add generic helpers for sysfs show/store to cppc_cpufreq (Sumit
Gupta)
- Make the scaling_setspeed cpufreq sysfs attribute return the actual
requested frequency to avoid confusion (Pengjie Zhang)
- Simplify the idle CPU time granularity test in the ondemand cpufreq
governor (Frederic Weisbecker)
- Enable asym capacity in intel_pstate only when CPU SMT is not
possible (Yaxiong Tian)
- Update the description of rate_limit_us default value in cpufreq
documentation (Yaxiong Tian)
- Add a command line option to adjust the C-states table in the
intel_idle driver, remove the 'preferred_cstates' module parameter
from it, add C-states validation to it and clean it up (Artem
Bityutskiy)
- Make the menu cpuidle governor always check the time till the
closest timer event when the scheduler tick has been stopped to
prevent it from mistakenly selecting the deepest available idle
state (Rafael Wysocki)
- Update the teo cpuidle governor to avoid making suboptimal
decisions in certain corner cases and generally improve idle state
selection accuracy (Rafael Wysocki)
- Remove an unlikely() annotation on the early-return condition in
menu_select() that leads to branch misprediction 100% of the time
on systems with only 1 idle state enabled, like ARM64 servers
(Breno Leitao)
- Add Christian Loehle to MAINTAINERS as a cpuidle reviewer
(Christian Loehle)
- Stop flagging the PM runtime workqueue as freezable to avoid system
suspend and resume deadlocks in subsystems that assume asynchronous
runtime PM to work during system-wide PM transitions (Rafael
Wysocki)
- Drop redundant NULL pointer checks before acomp_request_free() from
the hibernation code handling image saving (Rafael Wysocki)
- Update wakeup_sources_walk_start() to handle empty lists of wakeup
sources as appropriate (Samuel Wu)
- Make dev_pm_clear_wake_irq() check the power.wakeirq value under
power.lock to avoid race conditions (Gui-Dong Han)
- Avoid bit field races related to power.work_in_progress in the core
device suspend code (Xuewen Yan)
- Make several drivers discard pm_runtime_put() return value in
preparation for converting that function to a void one (Rafael
Wysocki)
- Add PL4 support for Ice Lake to the Intel RAPL power capping driver
(Daniel Tang)
- Replace sprintf() with sysfs_emit() in power capping sysfs show
functions (Sumeet Pawnikar)
- Make dev_pm_opp_get_level() return value match the documentation
after a previous update of the latter (Aleks Todorov)
- Use scoped for each OF child loop in the OPP code (Krzysztof
Kozlowski)
- Fix a bug in an example code snippet and correct typos in the
energy model management documentation (Patrick Little)
- Fix miscellaneous problems in cpupower (Kaushlendra Kumar):
* idle_monitor: Fix incorrect value logged after stop
* Fix inverted APERF capability check
* Use strcspn() to strip trailing newline
* Reset errno before strtoull()
* Show C0 in idle-info dump
- Improve cpupower installation procedure by making the systemd step
optional and allowing users to disable the installation of
systemd's unit file (João Marcos Costa)"
* tag 'pm-6.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (65 commits)
PM: sleep: core: Avoid bit field races related to work_in_progress
PM: sleep: wakeirq: harden dev_pm_clear_wake_irq() against races
cpufreq: Documentation: Update description of rate_limit_us default value
cpufreq: intel_pstate: Enable asym capacity only when CPU SMT is not possible
PM: wakeup: Handle empty list in wakeup_sources_walk_start()
PM: EM: Documentation: Fix bug in example code snippet
Documentation: Fix typos in energy model documentation
cpuidle: governors: teo: Refine intercepts-based idle state lookup
cpuidle: governors: teo: Adjust the classification of wakeup events
cpufreq: ondemand: Simplify idle cputime granularity test
cpufreq: userspace: make scaling_setspeed return the actual requested frequency
PM: hibernate: Drop NULL pointer checks before acomp_request_free()
cpufreq: CPPC: Add generic helpers for sysfs show/store
cpufreq: scmi: Fix device_node reference leak in scmi_cpu_domain_id()
cpufreq: ti-cpufreq: add support for AM62L3 SoC
cpufreq: dt-platdev: Add ti,am62l3 to blocklist
cpufreq/amd-pstate: Add comment explaining nominal_perf usage for performance policy
cpufreq: scmi: correct SCMI explanation
cpufreq: dt-platdev: Block the driver from probing on more QC platforms
rust: cpumask: rename methods of Cpumask for clarity and consistency
...
commit 9b1b3dcd28
66 changed files with 637 additions and 550 deletions
@@ -439,7 +439,7 @@ This governor exposes only one tunable:

 ``rate_limit_us``
 	Minimum time (in microseconds) that has to pass between two consecutive
 	runs of governor computations (default: 1.5 times the scaling driver's
-	transition latency or the maximum 2ms).
+	transition latency or 1ms if the driver does not provide a latency value).

 	The purpose of this tunable is to reduce the scheduler context overhead
 	of the governor which might be excessive without it.
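The updated default can be illustrated with a small userspace sketch. The helper name and the convention that a latency of 0 means "not provided" are assumptions for illustration, not kernel API:

```c
/*
 * Hypothetical helper mirroring the documented default for rate_limit_us:
 * 1.5 times the scaling driver's transition latency, or 1 ms when the
 * driver does not provide a latency value (modeled here as latency_us == 0).
 */
static unsigned int default_rate_limit_us(unsigned int latency_us)
{
	if (latency_us == 0)
		return 1000;			/* 1 ms fallback */

	return latency_us + latency_us / 2;	/* 1.5 * transition latency */
}
```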
@@ -35,6 +35,7 @@ properties:
       - description: v2 of CPUFREQ HW (EPSS)
         items:
           - enum:
+              - qcom,milos-cpufreq-epss
               - qcom,qcs8300-cpufreq-epss
               - qcom,qdu1000-cpufreq-epss
               - qcom,sa8255p-cpufreq-epss
@@ -169,6 +170,7 @@ allOf:
           compatible:
             contains:
               enum:
+                - qcom,milos-cpufreq-epss
                 - qcom,qcs8300-cpufreq-epss
                 - qcom,sc7280-cpufreq-epss
                 - qcom,sm8250-cpufreq-epss
@@ -14,8 +14,8 @@ subsystems willing to use that information to make energy-aware decisions.
 The source of the information about the power consumed by devices can vary greatly
 from one platform to another. These power costs can be estimated using
 devicetree data in some cases. In others, the firmware will know better.
-Alternatively, userspace might be best positioned. And so on. In order to avoid
-each and every client subsystem to re-implement support for each and every
+Alternatively, userspace might be best positioned. In order to avoid
+having each and every client subsystem re-implement support for each and every
 possible source of information on its own, the EM framework intervenes as an
 abstraction layer which standardizes the format of power cost tables in the
 kernel, hence enabling to avoid redundant work.
@@ -32,7 +32,7 @@ be found in the Intelligent Power Allocation in
 Documentation/driver-api/thermal/power_allocator.rst.
 Kernel subsystems might implement automatic detection to check whether EM
 registered devices have inconsistent scale (based on EM internal flag).
-Important thing to keep in mind is that when the power values are expressed in
+An important thing to keep in mind is that when the power values are expressed in
 an 'abstract scale' deriving real energy in micro-Joules would not be possible.

 The figure below depicts an example of drivers (Arm-specific here, but the
@@ -82,7 +82,7 @@ using kref mechanism. The device driver which provided the new EM at runtime,
 should call EM API to free it safely when it's no longer needed. The EM
 framework will handle the clean-up when it's possible.

-The kernel code which want to modify the EM values is protected from concurrent
+The kernel code which wants to modify the EM values is protected from concurrent
 access using a mutex. Therefore, the device driver code must run in sleeping
 context when it tries to modify the EM.

@@ -113,7 +113,7 @@ Registration of 'advanced' EM
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 The 'advanced' EM gets its name due to the fact that the driver is allowed
-to provide more precised power model. It's not limited to some implemented math
+to provide a more precise power model. It's not limited to some implemented math
 formula in the framework (like it is in 'simple' EM case). It can better reflect
 the real power measurements performed for each performance state. Thus, this
 registration method should be preferred in case considering EM static power
@@ -172,7 +172,7 @@ Registration of 'simple' EM
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~

 The 'simple' EM is registered using the framework helper function
-cpufreq_register_em_with_opp(). It implements a power model which is tight to
+cpufreq_register_em_with_opp(). It implements a power model which is tied to a
 math formula::

	Power = C * V^2 * f
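The 'simple' EM formula above can be sketched as a plain userspace computation. The function name, the unit choices (pF, mV, kHz, and a scaled-down result), and the scaling constant are all illustrative assumptions, not the kernel's actual implementation:

```c
#include <stdint.h>

/*
 * Illustrative sketch of the 'simple' EM power model, Power = C * V^2 * f.
 * Units and scaling are made up for the example: capacitance in pF,
 * voltage in mV, frequency in kHz, result scaled down by 1e9.
 */
static uint64_t em_simple_power(uint64_t cap_pf, uint64_t volt_mv,
				uint64_t freq_khz)
{
	return cap_pf * volt_mv * volt_mv * freq_khz / 1000000000ULL;
}
```

Note the quadratic dependence on voltage: halving V cuts the modeled power to a quarter at the same frequency, which is why lower OPPs (which run at lower voltage) are so much cheaper than a linear model would suggest.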
@@ -251,7 +251,7 @@ It returns the 'struct em_perf_state' pointer which is an array of performance
 states in ascending order.
 This function must be called in the RCU read lock section (after the
 rcu_read_lock()). When the EM table is not needed anymore there is a need to
-call rcu_real_unlock(). In this way the EM safely uses the RCU read section
+call rcu_read_unlock(). In this way the EM safely uses the RCU read section
 and protects the users. It also allows the EM framework to manage the memory
 and free it. More details how to use it can be found in Section 3.2 in the
 example driver.
@@ -308,12 +308,12 @@ EM framework::

  05
  06		/* Use the 'foo' protocol to ceil the frequency */
  07		freq = foo_get_freq_ceil(dev, *KHz);
- 08		if (freq < 0);
+ 08		if (freq < 0)
  09			return freq;
  10
  11		/* Estimate the power cost for the dev at the relevant freq. */
  12		power = foo_estimate_power(dev, freq);
- 13		if (power < 0);
+ 13		if (power < 0)
  14			return power;
  15
  16		/* Return the values to the EM framework */
@@ -712,10 +712,9 @@ out the following operations:

 * During system suspend pm_runtime_get_noresume() is called for every device
   right before executing the subsystem-level .prepare() callback for it and
   pm_runtime_barrier() is called for every device right before executing the
-  subsystem-level .suspend() callback for it. In addition to that the PM core
-  calls __pm_runtime_disable() with 'false' as the second argument for every
-  device right before executing the subsystem-level .suspend_late() callback
-  for it.
+  subsystem-level .suspend() callback for it. In addition to that, the PM
+  core disables runtime PM for every device right before executing the
+  subsystem-level .suspend_late() callback for it.

 * During system resume pm_runtime_enable() and pm_runtime_put() are called for
   every device right after executing the subsystem-level .resume_early()
@@ -244,7 +244,7 @@ Example 2.

 From these calculations, the Case 1 has the lowest total energy. So CPU 1
-is be the best candidate from an energy-efficiency standpoint.
+is the best candidate from an energy-efficiency standpoint.

 Big CPUs are generally more power hungry than the little ones and are thus used
 mainly when a task doesn't fit the littles. However, little CPUs aren't always
@@ -252,7 +252,7 @@ necessarily more energy-efficient than big CPUs. For some systems, the high OPPs
 of the little CPUs can be less energy-efficient than the lowest OPPs of the
 bigs, for example. So, if the little CPUs happen to have enough utilization at
 a specific point in time, a small task waking up at that moment could be better
-of executing on the big side in order to save energy, even though it would fit
+off executing on the big side in order to save energy, even though it would fit
 on the little side.

 And even in the case where all OPPs of the big CPUs are less energy-efficient
@@ -285,7 +285,7 @@ much that can be done by the scheduler to save energy without severely harming
 throughput. In order to avoid hurting performance with EAS, CPUs are flagged as
 'over-utilized' as soon as they are used at more than 80% of their compute
 capacity. As long as no CPUs are over-utilized in a root domain, load balancing
-is disabled and EAS overridess the wake-up balancing code. EAS is likely to load
+is disabled and EAS overrides the wake-up balancing code. EAS is likely to load
 the most energy efficient CPUs of the system more than the others if that can be
 done without harming throughput. So, the load-balancer is disabled to prevent
 it from breaking the energy-efficient task placement found by EAS. It is safe to
@@ -385,7 +385,7 @@ Using EAS with any other governor than schedutil is not supported.
 6.5 Scale-invariant utilization signals
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-In order to make accurate prediction across CPUs and for all performance
+In order to make accurate predictions across CPUs and for all performance
 states, EAS needs frequency-invariant and CPU-invariant PELT signals. These can
 be obtained using the architecture-defined arch_scale{cpu,freq}_capacity()
 callbacks.
@@ -6561,6 +6561,7 @@ F: rust/kernel/cpu.rs
 CPU IDLE TIME MANAGEMENT FRAMEWORK
 M:	"Rafael J. Wysocki" <rafael@kernel.org>
 M:	Daniel Lezcano <daniel.lezcano@linaro.org>
+R:	Christian Loehle <christian.loehle@arm.com>
 L:	linux-pm@vger.kernel.org
 S:	Maintained
 B:	https://bugzilla.kernel.org
@@ -19149,7 +19150,6 @@ M: Kevin Hilman <khilman@kernel.org>
 L:	linux-omap@vger.kernel.org
 S:	Maintained
 F:	arch/arm/*omap*/*pm*
-F:	drivers/cpufreq/omap-cpufreq.c

 OMAP POWERDOMAIN SOC ADAPTATION LAYER SUPPORT
 M:	Paul Walmsley <paul@pwsan.com>
@@ -1423,6 +1423,32 @@ out_err:
 }
 EXPORT_SYMBOL_GPL(cppc_get_perf_caps);

+/**
+ * cppc_perf_ctrs_in_pcc_cpu - Check if any perf counters of a CPU are in PCC.
+ * @cpu: CPU on which to check perf counters.
+ *
+ * Return: true if any of the counters are in PCC regions, false otherwise
+ */
+bool cppc_perf_ctrs_in_pcc_cpu(unsigned int cpu)
+{
+	struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpu);
+	struct cpc_register_resource *ref_perf_reg;
+
+	/*
+	 * If reference perf register is not supported then we should use the
+	 * nominal perf value
+	 */
+	ref_perf_reg = &cpc_desc->cpc_regs[REFERENCE_PERF];
+	if (!CPC_SUPPORTED(ref_perf_reg))
+		ref_perf_reg = &cpc_desc->cpc_regs[NOMINAL_PERF];
+
+	return CPC_IN_PCC(&cpc_desc->cpc_regs[DELIVERED_CTR]) ||
+	       CPC_IN_PCC(&cpc_desc->cpc_regs[REFERENCE_CTR]) ||
+	       CPC_IN_PCC(&cpc_desc->cpc_regs[CTR_WRAP_TIME]) ||
+	       CPC_IN_PCC(ref_perf_reg);
+}
+EXPORT_SYMBOL_GPL(cppc_perf_ctrs_in_pcc_cpu);
+
 /**
  * cppc_perf_ctrs_in_pcc - Check if any perf counters are in a PCC region.
  *
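The lookup pattern in the new cppc_perf_ctrs_in_pcc_cpu() above — fall back to the nominal-performance register when the reference-performance register is unsupported, then OR together the per-register PCC checks — can be re-created in userspace. The structs and flags below are illustrative stand-ins for the real 'struct cpc_register_resource' bookkeeping, not kernel API:

```c
#include <stdbool.h>

/* Simplified stand-in for a CPC register resource. */
struct fake_reg {
	bool supported;
	bool in_pcc;
};

/* Simplified stand-in for struct cpc_desc, one field per relevant register. */
struct fake_cpc_desc {
	struct fake_reg delivered, reference, ctr_wrap, ref_perf, nominal;
};

static bool perf_ctrs_in_pcc(const struct fake_cpc_desc *d)
{
	const struct fake_reg *ref = &d->ref_perf;

	/* Fall back to nominal perf if reference perf is unsupported. */
	if (!ref->supported)
		ref = &d->nominal;

	return d->delivered.in_pcc || d->reference.in_pcc ||
	       d->ctr_wrap.in_pcc || ref->in_pcc;
}
```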
@@ -1437,27 +1463,7 @@ bool cppc_perf_ctrs_in_pcc(void)
 	int cpu;

 	for_each_online_cpu(cpu) {
-		struct cpc_register_resource *ref_perf_reg;
-		struct cpc_desc *cpc_desc;
-
-		cpc_desc = per_cpu(cpc_desc_ptr, cpu);
-
-		if (CPC_IN_PCC(&cpc_desc->cpc_regs[DELIVERED_CTR]) ||
-		    CPC_IN_PCC(&cpc_desc->cpc_regs[REFERENCE_CTR]) ||
-		    CPC_IN_PCC(&cpc_desc->cpc_regs[CTR_WRAP_TIME]))
-			return true;
-
-		ref_perf_reg = &cpc_desc->cpc_regs[REFERENCE_PERF];
-
-		/*
-		 * If reference perf register is not supported then we should
-		 * use the nominal perf value
-		 */
-		if (!CPC_SUPPORTED(ref_perf_reg))
-			ref_perf_reg = &cpc_desc->cpc_regs[NOMINAL_PERF];
-
-		if (CPC_IN_PCC(ref_perf_reg))
+		if (cppc_perf_ctrs_in_pcc_cpu(cpu))
 			return true;
 	}

@@ -1647,10 +1647,11 @@ static void device_suspend_late(struct device *dev, pm_message_t state, bool asy
 		goto Complete;

 	/*
-	 * Disable runtime PM for the device without checking if there is a
-	 * pending resume request for it.
+	 * After this point, any runtime PM operations targeting the device
+	 * will fail until the corresponding pm_runtime_enable() call in
+	 * device_resume_early().
	 */
-	__pm_runtime_disable(dev, false);
+	pm_runtime_disable(dev);

 	if (dev->power.syscore)
 		goto Skip;
@@ -83,13 +83,16 @@ EXPORT_SYMBOL_GPL(dev_pm_set_wake_irq);
  */
 void dev_pm_clear_wake_irq(struct device *dev)
 {
-	struct wake_irq *wirq = dev->power.wakeirq;
+	struct wake_irq *wirq;
 	unsigned long flags;

-	if (!wirq)
-		return;
-
 	spin_lock_irqsave(&dev->power.lock, flags);
+	wirq = dev->power.wakeirq;
+	if (!wirq) {
+		spin_unlock_irqrestore(&dev->power.lock, flags);
+		return;
+	}
+
 	device_wakeup_detach_irq(dev);
 	dev->power.wakeirq = NULL;
 	spin_unlock_irqrestore(&dev->power.lock, flags);
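The fix above is an instance of a general pattern: read a shared pointer only after taking the lock, instead of caching it beforehand and racing with a concurrent clear. A single-threaded userspace sketch of the corrected ordering, with a pthread mutex standing in for power.lock and a hypothetical 'struct fake_dev' for the device's power bookkeeping:

```c
#include <pthread.h>
#include <stddef.h>

/* Illustrative stand-in for the device's power bookkeeping. */
struct fake_dev {
	pthread_mutex_t lock;
	int *wakeirq;
};

/* Returns 1 if a wake IRQ was cleared, 0 if there was nothing to clear. */
static int clear_wake_irq(struct fake_dev *dev)
{
	int *wirq;

	pthread_mutex_lock(&dev->lock);
	wirq = dev->wakeirq;	/* read under the lock, not before it */
	if (!wirq) {
		pthread_mutex_unlock(&dev->lock);
		return 0;
	}
	dev->wakeirq = NULL;
	pthread_mutex_unlock(&dev->lock);
	return 1;
}
```

Checking the pointer before acquiring the lock (as the old code did) leaves a window where another thread clears it between the check and the locked section; re-reading under the lock closes that window.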
@@ -275,9 +275,7 @@ EXPORT_SYMBOL_GPL(wakeup_sources_read_unlock);
  */
 struct wakeup_source *wakeup_sources_walk_start(void)
 {
-	struct list_head *ws_head = &wakeup_sources;
-
-	return list_entry_rcu(ws_head->next, struct wakeup_source, entry);
+	return list_first_or_null_rcu(&wakeup_sources, struct wakeup_source, entry);
 }
 EXPORT_SYMBOL_GPL(wakeup_sources_walk_start);

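Why the change above matters can be re-created in userspace: on an empty circular list, computing the container of head->next yields a bogus pointer derived from the head itself, whereas a first-or-null lookup returns NULL. The list head and container_of() below are simplified, non-RCU versions of the kernel's:

```c
#include <stddef.h>

/* Simplified circular doubly-linked list head, as in the kernel. */
struct list_head {
	struct list_head *next, *prev;
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct wakeup_source {
	int id;
	struct list_head entry;
};

/* first-or-null semantics: NULL for an empty list, first entry otherwise */
static struct wakeup_source *first_or_null(struct list_head *head)
{
	struct list_head *pos = head->next;

	if (pos == head)	/* empty list points back at its own head */
		return NULL;

	return container_of(pos, struct wakeup_source, entry);
}
```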
@@ -141,11 +141,6 @@ config ARM_MEDIATEK_CPUFREQ_HW
	  The driver implements the cpufreq interface for this HW engine.
	  Say Y if you want to support CPUFreq HW.

-config ARM_OMAP2PLUS_CPUFREQ
-	bool "TI OMAP2+"
-	depends on ARCH_OMAP2PLUS || COMPILE_TEST
-	default ARCH_OMAP2PLUS
-
 config ARM_QCOM_CPUFREQ_NVMEM
	tristate "Qualcomm nvmem based CPUFreq"
	depends on ARCH_QCOM || COMPILE_TEST
@@ -69,7 +69,6 @@ obj-$(CONFIG_ARM_KIRKWOOD_CPUFREQ) += kirkwood-cpufreq.o
 obj-$(CONFIG_ARM_MEDIATEK_CPUFREQ)	+= mediatek-cpufreq.o
 obj-$(CONFIG_ARM_MEDIATEK_CPUFREQ_HW)	+= mediatek-cpufreq-hw.o
 obj-$(CONFIG_MACH_MVEBU_V7)		+= mvebu-cpufreq.o
-obj-$(CONFIG_ARM_OMAP2PLUS_CPUFREQ)	+= omap-cpufreq.o
 obj-$(CONFIG_ARM_PXA2xx_CPUFREQ)	+= pxa2xx-cpufreq.o
 obj-$(CONFIG_PXA3xx)			+= pxa3xx-cpufreq.o
 obj-$(CONFIG_ARM_QCOM_CPUFREQ_HW)	+= qcom-cpufreq-hw.o
@@ -636,6 +636,19 @@ static void amd_pstate_update_min_max_limit(struct cpufreq_policy *policy)
	WRITE_ONCE(cpudata->max_limit_freq, policy->max);

	if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) {
+		/*
+		 * For performance policy, set MinPerf to nominal_perf rather than
+		 * highest_perf or lowest_nonlinear_perf.
+		 *
+		 * Per commit 0c411b39e4f4c, using highest_perf was observed
+		 * to cause frequency throttling on power-limited platforms, leading to
+		 * performance regressions. Using lowest_nonlinear_perf would limit
+		 * performance too much for HPC workloads requiring high frequency
+		 * operation and minimal wakeup latency from idle states.
+		 *
+		 * nominal_perf therefore provides a balance by avoiding throttling
+		 * while still maintaining enough performance for HPC workloads.
+		 */
		perf.min_limit_perf = min(perf.nominal_perf, perf.max_limit_perf);
		WRITE_ONCE(cpudata->min_limit_freq, min(cpudata->nominal_freq, cpudata->max_limit_freq));
	} else {
@@ -54,31 +54,24 @@ static int cppc_perf_from_fbctrs(struct cppc_perf_fb_ctrs *fb_ctrs_t0,
				 struct cppc_perf_fb_ctrs *fb_ctrs_t1);

 /**
- * cppc_scale_freq_workfn - CPPC arch_freq_scale updater for frequency invariance
- * @work: The work item.
+ * __cppc_scale_freq_tick - CPPC arch_freq_scale updater for frequency invariance
+ * @cppc_fi: per-cpu CPPC FIE data.
  *
- * The CPPC driver register itself with the topology core to provide its own
+ * The CPPC driver registers itself with the topology core to provide its own
  * implementation (cppc_scale_freq_tick()) of topology_scale_freq_tick() which
  * gets called by the scheduler on every tick.
  *
  * Note that the arch specific counters have higher priority than CPPC counters,
  * if available, though the CPPC driver doesn't need to have any special
  * handling for that.
- *
- * On an invocation of cppc_scale_freq_tick(), we schedule an irq work (since we
- * reach here from hard-irq context), which then schedules a normal work item
- * and cppc_scale_freq_workfn() updates the per_cpu arch_freq_scale variable
- * based on the counter updates since the last tick.
  */
-static void cppc_scale_freq_workfn(struct kthread_work *work)
+static void __cppc_scale_freq_tick(struct cppc_freq_invariance *cppc_fi)
 {
-	struct cppc_freq_invariance *cppc_fi;
	struct cppc_perf_fb_ctrs fb_ctrs = {0};
	struct cppc_cpudata *cpu_data;
	unsigned long local_freq_scale;
	u64 perf;

-	cppc_fi = container_of(work, struct cppc_freq_invariance, work);
	cpu_data = cppc_fi->cpu_data;

	if (cppc_get_perf_ctrs(cppc_fi->cpu, &fb_ctrs)) {
@@ -102,6 +95,24 @@ static void cppc_scale_freq_workfn(struct kthread_work *work)
	per_cpu(arch_freq_scale, cppc_fi->cpu) = local_freq_scale;
 }

+static void cppc_scale_freq_tick(void)
+{
+	__cppc_scale_freq_tick(&per_cpu(cppc_freq_inv, smp_processor_id()));
+}
+
+static struct scale_freq_data cppc_sftd = {
+	.source = SCALE_FREQ_SOURCE_CPPC,
+	.set_freq_scale = cppc_scale_freq_tick,
+};
+
+static void cppc_scale_freq_workfn(struct kthread_work *work)
+{
+	struct cppc_freq_invariance *cppc_fi;
+
+	cppc_fi = container_of(work, struct cppc_freq_invariance, work);
+	__cppc_scale_freq_tick(cppc_fi);
+}
+
 static void cppc_irq_work(struct irq_work *irq_work)
 {
	struct cppc_freq_invariance *cppc_fi;
@@ -110,7 +121,14 @@ static void cppc_irq_work(struct irq_work *irq_work)
	kthread_queue_work(kworker_fie, &cppc_fi->work);
 }

-static void cppc_scale_freq_tick(void)
+/*
+ * Reading perf counters may sleep if the CPC regs are in PCC. Thus, we
+ * schedule an irq work in scale_freq_tick (since we reach here from hard-irq
+ * context), which then schedules a normal work item cppc_scale_freq_workfn()
+ * that updates the per_cpu arch_freq_scale variable based on the counter
+ * updates since the last tick.
+ */
+static void cppc_scale_freq_tick_pcc(void)
 {
	struct cppc_freq_invariance *cppc_fi = &per_cpu(cppc_freq_inv, smp_processor_id());

@@ -121,13 +139,14 @@ static void cppc_scale_freq_tick(void)
	irq_work_queue(&cppc_fi->irq_work);
 }

-static struct scale_freq_data cppc_sftd = {
+static struct scale_freq_data cppc_sftd_pcc = {
	.source = SCALE_FREQ_SOURCE_CPPC,
-	.set_freq_scale = cppc_scale_freq_tick,
+	.set_freq_scale = cppc_scale_freq_tick_pcc,
 };

 static void cppc_cpufreq_cpu_fie_init(struct cpufreq_policy *policy)
 {
+	struct scale_freq_data *sftd = &cppc_sftd;
	struct cppc_freq_invariance *cppc_fi;
	int cpu, ret;

@@ -138,8 +157,11 @@ static void cppc_cpufreq_cpu_fie_init(struct cpufreq_policy *policy)
		cppc_fi = &per_cpu(cppc_freq_inv, cpu);
		cppc_fi->cpu = cpu;
		cppc_fi->cpu_data = policy->driver_data;
-		kthread_init_work(&cppc_fi->work, cppc_scale_freq_workfn);
-		init_irq_work(&cppc_fi->irq_work, cppc_irq_work);
+
+		if (cppc_perf_ctrs_in_pcc_cpu(cpu)) {
+			kthread_init_work(&cppc_fi->work, cppc_scale_freq_workfn);
+			init_irq_work(&cppc_fi->irq_work, cppc_irq_work);
+			sftd = &cppc_sftd_pcc;
+		}

		ret = cppc_get_perf_ctrs(cpu, &cppc_fi->prev_perf_fb_ctrs);

@@ -155,7 +177,7 @@ static void cppc_cpufreq_cpu_fie_init(struct cpufreq_policy *policy)
	}

	/* Register for freq-invariance */
-	topology_set_scale_freq_source(&cppc_sftd, policy->cpus);
+	topology_set_scale_freq_source(sftd, policy->cpus);
 }

 /*
@@ -178,13 +200,15 @@ static void cppc_cpufreq_cpu_fie_exit(struct cpufreq_policy *policy)
	topology_clear_scale_freq_source(SCALE_FREQ_SOURCE_CPPC, policy->related_cpus);

	for_each_cpu(cpu, policy->related_cpus) {
+		if (!cppc_perf_ctrs_in_pcc_cpu(cpu))
+			continue;
		cppc_fi = &per_cpu(cppc_freq_inv, cpu);
		irq_work_sync(&cppc_fi->irq_work);
		kthread_cancel_work_sync(&cppc_fi->work);
	}
 }

-static void __init cppc_freq_invariance_init(void)
+static void cppc_fie_kworker_init(void)
 {
	struct sched_attr attr = {
		.size = sizeof(struct sched_attr),
@ -201,22 +225,12 @@ static void __init cppc_freq_invariance_init(void)
|
|||
};
|
||||
int ret;
|
||||
|
||||
if (fie_disabled != FIE_ENABLED && fie_disabled != FIE_DISABLED) {
|
||||
fie_disabled = FIE_ENABLED;
|
||||
if (cppc_perf_ctrs_in_pcc()) {
|
||||
pr_info("FIE not enabled on systems with registers in PCC\n");
|
||||
fie_disabled = FIE_DISABLED;
|
||||
}
|
||||
}
|
||||
|
||||
if (fie_disabled)
|
||||
return;
|
||||
|
||||
kworker_fie = kthread_run_worker(0, "cppc_fie");
|
||||
if (IS_ERR(kworker_fie)) {
|
||||
pr_warn("%s: failed to create kworker_fie: %ld\n", __func__,
|
||||
PTR_ERR(kworker_fie));
|
||||
fie_disabled = FIE_DISABLED;
|
||||
kworker_fie = NULL;
|
||||
return;
|
||||
}
|
||||
|
||||
|
|
@ -226,15 +240,33 @@ static void __init cppc_freq_invariance_init(void)
|
|||
ret);
|
||||
kthread_destroy_worker(kworker_fie);
|
||||
fie_disabled = FIE_DISABLED;
|
||||
kworker_fie = NULL;
|
||||
}
|
||||
}
|
||||
|
||||
static void __init cppc_freq_invariance_init(void)
|
||||
{
|
||||
bool perf_ctrs_in_pcc = cppc_perf_ctrs_in_pcc();
|
||||
|
||||
if (fie_disabled == FIE_UNSET) {
|
||||
if (perf_ctrs_in_pcc) {
|
||||
pr_info("FIE not enabled on systems with registers in PCC\n");
|
||||
fie_disabled = FIE_DISABLED;
|
||||
} else {
|
||||
fie_disabled = FIE_ENABLED;
|
||||
}
|
||||
}
|
||||
|
||||
if (fie_disabled || !perf_ctrs_in_pcc)
|
||||
return;
|
||||
|
||||
cppc_fie_kworker_init();
|
||||
}
|
||||
|
||||
static void cppc_freq_invariance_exit(void)
|
||||
{
|
||||
if (fie_disabled)
|
||||
return;
|
||||
|
||||
kthread_destroy_worker(kworker_fie);
|
||||
if (kworker_fie)
|
||||
kthread_destroy_worker(kworker_fie);
|
||||
}
|
||||
|
||||
#else
|
||||
|
|
@ -831,14 +863,13 @@ static ssize_t store_auto_select(struct cpufreq_policy *policy,
|
|||
return count;
|
||||
}
|
||||
|
||||
static ssize_t show_auto_act_window(struct cpufreq_policy *policy, char *buf)
|
||||
static ssize_t cppc_cpufreq_sysfs_show_u64(unsigned int cpu,
|
||||
int (*get_func)(int, u64 *),
|
||||
char *buf)
|
||||
{
|
||||
u64 val;
|
||||
int ret;
|
||||
int ret = get_func((int)cpu, &val);
|
||||
|
||||
ret = cppc_get_auto_act_window(policy->cpu, &val);
|
||||
|
||||
/* show "<unsupported>" when this register is not supported by cpc */
|
||||
if (ret == -EOPNOTSUPP)
|
||||
return sysfs_emit(buf, "<unsupported>\n");
|
||||
|
||||
|
|
@ -848,42 +879,9 @@ static ssize_t show_auto_act_window(struct cpufreq_policy *policy, char *buf)
|
|||
return sysfs_emit(buf, "%llu\n", val);
|
||||
}
|
||||
|
||||
static ssize_t store_auto_act_window(struct cpufreq_policy *policy,
|
||||
const char *buf, size_t count)
|
||||
{
|
||||
u64 usec;
|
||||
int ret;
|
||||
|
||||
ret = kstrtou64(buf, 0, &usec);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = cppc_set_auto_act_window(policy->cpu, usec);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
return count;
|
||||
}
|
||||
|
||||
static ssize_t show_energy_performance_preference_val(struct cpufreq_policy *policy, char *buf)
|
||||
{
|
||||
u64 val;
|
||||
int ret;
|
||||
|
||||
ret = cppc_get_epp_perf(policy->cpu, &val);
|
||||
|
||||
/* show "<unsupported>" when this register is not supported by cpc */
|
||||
if (ret == -EOPNOTSUPP)
|
||||
return sysfs_emit(buf, "<unsupported>\n");
|
||||
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
return sysfs_emit(buf, "%llu\n", val);
|
||||
}
|
||||
|
||||
static ssize_t store_energy_performance_preference_val(struct cpufreq_policy *policy,
|
||||
const char *buf, size_t count)
|
||||
static ssize_t cppc_cpufreq_sysfs_store_u64(unsigned int cpu,
|
||||
int (*set_func)(int, u64),
|
||||
const char *buf, size_t count)
|
||||
{
|
||||
u64 val;
|
||||
int ret;
|
||||
|
|
@ -892,13 +890,29 @@ static ssize_t store_energy_performance_preference_val(struct cpufreq_policy *po
|
|||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = cppc_set_epp(policy->cpu, val);
|
||||
if (ret)
|
||||
return ret;
|
||||
ret = set_func((int)cpu, val);
|
||||
|
||||
return count;
|
||||
return ret ? ret : count;
|
||||
}
|
||||
|
||||
#define CPPC_CPUFREQ_ATTR_RW_U64(_name, _get_func, _set_func) \
|
||||
static ssize_t show_##_name(struct cpufreq_policy *policy, char *buf) \
|
||||
{ \
|
||||
return cppc_cpufreq_sysfs_show_u64(policy->cpu, _get_func, buf);\
|
||||
} \
|
||||
static ssize_t store_##_name(struct cpufreq_policy *policy, \
|
||||
const char *buf, size_t count) \
|
||||
{ \
|
||||
return cppc_cpufreq_sysfs_store_u64(policy->cpu, _set_func, \
|
||||
buf, count); \
|
||||
}
|
||||
|
||||
CPPC_CPUFREQ_ATTR_RW_U64(auto_act_window, cppc_get_auto_act_window,
|
||||
cppc_set_auto_act_window)
|
||||
|
||||
CPPC_CPUFREQ_ATTR_RW_U64(energy_performance_preference_val,
|
||||
cppc_get_epp_perf, cppc_set_epp)
|
||||
|
||||
cpufreq_freq_attr_ro(freqdomain_cpus);
|
||||
cpufreq_freq_attr_rw(auto_select);
|
||||
cpufreq_freq_attr_rw(auto_act_window);
|
||||
|
|
|
|||
|
|
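The two generic helpers added above fold four near-identical sysfs show/store routines into one pair parameterized by an accessor callback, which the `CPPC_CPUFREQ_ATTR_RW_U64()` macro then stamps out per attribute. A user-space sketch of the "show" half, assuming `snprintf()` in place of `sysfs_emit()` and with hypothetical fake accessors standing in for the CPPC register reads:

```c
#include <assert.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Fake accessors standing in for cppc_get_auto_act_window() and friends. */
static int fake_get(int cpu, unsigned long long *val)
{
        *val = 42ULL + cpu;
        return 0;
}

static int fake_unsupported(int cpu, unsigned long long *val)
{
        (void)cpu;
        (void)val;
        return -EOPNOTSUPP;     /* register absent from the _CPC table */
}

/* Generic "show" helper in the spirit of cppc_cpufreq_sysfs_show_u64(). */
static int show_u64(unsigned int cpu,
                    int (*get_func)(int, unsigned long long *),
                    char *buf, size_t len)
{
        unsigned long long val;
        int ret = get_func((int)cpu, &val);

        /* show "<unsupported>" when the register is not implemented */
        if (ret == -EOPNOTSUPP)
                return snprintf(buf, len, "<unsupported>\n");
        if (ret)
                return ret;

        return snprintf(buf, len, "%llu\n", val);
}
```

Any u64-valued register then only needs its accessor pair; the formatting and the `<unsupported>` special case live in one place.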
@@ -147,6 +147,8 @@ static const struct of_device_id blocklist[] __initconst = {
        { .compatible = "nvidia,tegra30", },
        { .compatible = "nvidia,tegra114", },
        { .compatible = "nvidia,tegra124", },
        { .compatible = "nvidia,tegra186", },
        { .compatible = "nvidia,tegra194", },
        { .compatible = "nvidia,tegra210", },
        { .compatible = "nvidia,tegra234", },

@@ -169,8 +171,11 @@ static const struct of_device_id blocklist[] __initconst = {
        { .compatible = "qcom,sdm845", },
        { .compatible = "qcom,sdx75", },
        { .compatible = "qcom,sm6115", },
        { .compatible = "qcom,sm6125", },
        { .compatible = "qcom,sm6150", },
        { .compatible = "qcom,sm6350", },
        { .compatible = "qcom,sm6375", },
        { .compatible = "qcom,sm7125", },
        { .compatible = "qcom,sm7225", },
        { .compatible = "qcom,sm7325", },
        { .compatible = "qcom,sm8150", },

@@ -191,6 +196,7 @@ static const struct of_device_id blocklist[] __initconst = {
        { .compatible = "ti,am625", },
        { .compatible = "ti,am62a7", },
        { .compatible = "ti,am62d2", },
        { .compatible = "ti,am62l3", },
        { .compatible = "ti,am62p5", },

        { .compatible = "qcom,ipq5332", },
@@ -2803,7 +2803,7 @@ static int cpufreq_boost_trigger_state(int state)
{
        struct cpufreq_policy *policy;
        unsigned long flags;
        int ret = 0;
        int ret = -EOPNOTSUPP;

        /*
         * Don't compare 'cpufreq_driver->boost_enabled' with 'state' here to

@@ -2820,15 +2820,14 @@ static int cpufreq_boost_trigger_state(int state)
                        continue;

                ret = policy_set_boost(policy, state);
                if (ret)
                        goto err_reset_state;
                if (unlikely(ret))
                        break;
        }

        cpus_read_unlock();

        return 0;

err_reset_state:
        cpus_read_unlock();
        if (likely(!ret))
                return 0;

        write_lock_irqsave(&cpufreq_driver_lock, flags);
        cpufreq_driver->boost_enabled = !state;
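The boost hunk above seeds `ret` with `-EOPNOTSUPP` so that when every policy is skipped (none supports boost), the function falls through and reports "not supported" instead of success. A minimal user-space sketch of that aggregation pattern, with an illustrative `struct policy` standing in for the kernel structure:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Illustrative stand-in for a cpufreq policy; not the kernel API. */
struct policy {
        bool boost_supported;
        bool fails;
};

/*
 * Returns 0 if at least one policy was updated successfully,
 * -EOPNOTSUPP if every policy was skipped, or the first real error.
 */
static int trigger_boost(const struct policy *p, int n)
{
        int ret = -EOPNOTSUPP;  /* assume "nothing supports boost" */
        int i;

        for (i = 0; i < n; i++) {
                if (!p[i].boost_supported)
                        continue;       /* skipped policies leave ret alone */
                ret = p[i].fails ? -EIO : 0;
                if (ret)
                        break;          /* stop at the first real failure */
        }
        return ret;
}
```

The trick is that a skipped iteration never writes `ret`, so the pre-seeded error survives exactly when the loop body never ran to completion for any policy.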
@@ -334,17 +334,12 @@ static void od_free(struct policy_dbs_info *policy_dbs)
static int od_init(struct dbs_data *dbs_data)
{
        struct od_dbs_tuners *tuners;
        u64 idle_time;
        int cpu;

        tuners = kzalloc(sizeof(*tuners), GFP_KERNEL);
        if (!tuners)
                return -ENOMEM;

        cpu = get_cpu();
        idle_time = get_cpu_idle_time_us(cpu, NULL);
        put_cpu();
        if (idle_time != -1ULL) {
        if (tick_nohz_is_active()) {
                /* Idle micro accounting is supported. Use finer thresholds */
                dbs_data->up_threshold = MICRO_FREQUENCY_UP_THRESHOLD;
        } else {
@@ -49,7 +49,9 @@ static int cpufreq_set(struct cpufreq_policy *policy, unsigned int freq)

static ssize_t show_speed(struct cpufreq_policy *policy, char *buf)
{
        return sprintf(buf, "%u\n", policy->cur);
        struct userspace_policy *userspace = policy->governor_data;

        return sprintf(buf, "%u\n", userspace->setspeed);
}

static int cpufreq_userspace_policy_init(struct cpufreq_policy *policy)
@@ -1161,7 +1161,7 @@ static void hybrid_init_cpu_capacity_scaling(bool refresh)
         * the capacity of SMT threads is not deterministic even approximately,
         * do not do that when SMT is in use.
         */
        if (hwp_is_hybrid && !sched_smt_active() && arch_enable_hybrid_capacity_scale()) {
        if (hwp_is_hybrid && !cpu_smt_possible() && arch_enable_hybrid_capacity_scale()) {
                hybrid_refresh_cpu_capacity_scaling();
                /*
                 * Disabling ITMT causes sched domains to be rebuilt to disable asym
@@ -1,195 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
 * CPU frequency scaling for OMAP using OPP information
 *
 * Copyright (C) 2005 Nokia Corporation
 * Written by Tony Lindgren <tony@atomide.com>
 *
 * Based on cpu-sa1110.c, Copyright (C) 2001 Russell King
 *
 * Copyright (C) 2007-2011 Texas Instruments, Inc.
 * - OMAP3/4 support by Rajendra Nayak, Santosh Shilimkar
 */

#define pr_fmt(fmt)	KBUILD_MODNAME ": " fmt

#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/cpufreq.h>
#include <linux/delay.h>
#include <linux/init.h>
#include <linux/err.h>
#include <linux/clk.h>
#include <linux/io.h>
#include <linux/pm_opp.h>
#include <linux/cpu.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/regulator/consumer.h>

/* OPP tolerance in percentage */
#define OPP_TOLERANCE	4

static struct cpufreq_frequency_table *freq_table;
static atomic_t freq_table_users = ATOMIC_INIT(0);
static struct device *mpu_dev;
static struct regulator *mpu_reg;

static int omap_target(struct cpufreq_policy *policy, unsigned int index)
{
        int r, ret;
        struct dev_pm_opp *opp;
        unsigned long freq, volt = 0, volt_old = 0, tol = 0;
        unsigned int old_freq, new_freq;

        old_freq = policy->cur;
        new_freq = freq_table[index].frequency;

        freq = new_freq * 1000;
        ret = clk_round_rate(policy->clk, freq);
        if (ret < 0) {
                dev_warn(mpu_dev,
                         "CPUfreq: Cannot find matching frequency for %lu\n",
                         freq);
                return ret;
        }
        freq = ret;

        if (mpu_reg) {
                opp = dev_pm_opp_find_freq_ceil(mpu_dev, &freq);
                if (IS_ERR(opp)) {
                        dev_err(mpu_dev, "%s: unable to find MPU OPP for %d\n",
                                __func__, new_freq);
                        return -EINVAL;
                }
                volt = dev_pm_opp_get_voltage(opp);
                dev_pm_opp_put(opp);
                tol = volt * OPP_TOLERANCE / 100;
                volt_old = regulator_get_voltage(mpu_reg);
        }

        dev_dbg(mpu_dev, "cpufreq-omap: %u MHz, %ld mV --> %u MHz, %ld mV\n",
                old_freq / 1000, volt_old ? volt_old / 1000 : -1,
                new_freq / 1000, volt ? volt / 1000 : -1);

        /* scaling up? scale voltage before frequency */
        if (mpu_reg && (new_freq > old_freq)) {
                r = regulator_set_voltage(mpu_reg, volt - tol, volt + tol);
                if (r < 0) {
                        dev_warn(mpu_dev, "%s: unable to scale voltage up.\n",
                                 __func__);
                        return r;
                }
        }

        ret = clk_set_rate(policy->clk, new_freq * 1000);

        /* scaling down? scale voltage after frequency */
        if (mpu_reg && (new_freq < old_freq)) {
                r = regulator_set_voltage(mpu_reg, volt - tol, volt + tol);
                if (r < 0) {
                        dev_warn(mpu_dev, "%s: unable to scale voltage down.\n",
                                 __func__);
                        clk_set_rate(policy->clk, old_freq * 1000);
                        return r;
                }
        }

        return ret;
}

static inline void freq_table_free(void)
{
        if (atomic_dec_and_test(&freq_table_users))
                dev_pm_opp_free_cpufreq_table(mpu_dev, &freq_table);
}

static int omap_cpu_init(struct cpufreq_policy *policy)
{
        int result;

        policy->clk = clk_get(NULL, "cpufreq_ck");
        if (IS_ERR(policy->clk))
                return PTR_ERR(policy->clk);

        if (!freq_table) {
                result = dev_pm_opp_init_cpufreq_table(mpu_dev, &freq_table);
                if (result) {
                        dev_err(mpu_dev,
                                "%s: cpu%d: failed creating freq table[%d]\n",
                                __func__, policy->cpu, result);
                        clk_put(policy->clk);
                        return result;
                }
        }

        atomic_inc_return(&freq_table_users);

        /* FIXME: what's the actual transition time? */
        cpufreq_generic_init(policy, freq_table, 300 * 1000);

        return 0;
}

static void omap_cpu_exit(struct cpufreq_policy *policy)
{
        freq_table_free();
        clk_put(policy->clk);
}

static struct cpufreq_driver omap_driver = {
        .flags          = CPUFREQ_NEED_INITIAL_FREQ_CHECK,
        .verify         = cpufreq_generic_frequency_table_verify,
        .target_index   = omap_target,
        .get            = cpufreq_generic_get,
        .init           = omap_cpu_init,
        .exit           = omap_cpu_exit,
        .register_em    = cpufreq_register_em_with_opp,
        .name           = "omap",
};

static int omap_cpufreq_probe(struct platform_device *pdev)
{
        mpu_dev = get_cpu_device(0);
        if (!mpu_dev) {
                pr_warn("%s: unable to get the MPU device\n", __func__);
                return -EINVAL;
        }

        mpu_reg = regulator_get(mpu_dev, "vcc");
        if (IS_ERR(mpu_reg)) {
                pr_warn("%s: unable to get MPU regulator\n", __func__);
                mpu_reg = NULL;
        } else {
                /*
                 * Ensure physical regulator is present.
                 * (e.g. could be dummy regulator.)
                 */
                if (regulator_get_voltage(mpu_reg) < 0) {
                        pr_warn("%s: physical regulator not present for MPU\n",
                                __func__);
                        regulator_put(mpu_reg);
                        mpu_reg = NULL;
                }
        }

        return cpufreq_register_driver(&omap_driver);
}

static void omap_cpufreq_remove(struct platform_device *pdev)
{
        cpufreq_unregister_driver(&omap_driver);
}

static struct platform_driver omap_cpufreq_platdrv = {
        .driver = {
                .name   = "omap-cpufreq",
        },
        .probe          = omap_cpufreq_probe,
        .remove         = omap_cpufreq_remove,
};
module_platform_driver(omap_cpufreq_platdrv);

MODULE_DESCRIPTION("cpufreq driver for OMAP SoCs");
MODULE_LICENSE("GPL");
@@ -3,7 +3,6 @@
//! Rust based implementation of the cpufreq-dt driver.

use kernel::{
    c_str,
    clk::Clk,
    cpu, cpufreq,
    cpumask::CpumaskVar,

@@ -52,7 +51,7 @@ impl opp::ConfigOps for CPUFreqDTDriver {}

#[vtable]
impl cpufreq::Driver for CPUFreqDTDriver {
    const NAME: &'static CStr = c_str!("cpufreq-dt");
    const NAME: &'static CStr = c"cpufreq-dt";
    const FLAGS: u16 = cpufreq::flags::NEED_INITIAL_FREQ_CHECK | cpufreq::flags::IS_COOLING_DEV;
    const BOOST_ENABLED: bool = true;

@@ -197,7 +196,7 @@ kernel::of_device_table!(
    OF_TABLE,
    MODULE_OF_TABLE,
    <CPUFreqDTDriver as platform::Driver>::IdInfo,
    [(of::DeviceId::new(c_str!("operating-points-v2")), ())]
    [(of::DeviceId::new(c"operating-points-v2"), ())]
);

impl platform::Driver for CPUFreqDTDriver {
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * System Control and Power Interface (SCMI) based CPUFreq Interface driver
 * System Control and Management Interface (SCMI) based CPUFreq Interface driver
 *
 * Copyright (C) 2018-2021 ARM Ltd.
 * Sudeep Holla <sudeep.holla@arm.com>

@@ -101,6 +101,7 @@ static int scmi_cpu_domain_id(struct device *cpu_dev)
                        return -EINVAL;
        }

        of_node_put(domain_id.np);
        return domain_id.args[0];
}
@@ -70,6 +70,12 @@ enum {
#define AM62A7_SUPPORT_R_MPU_OPP	BIT(1)
#define AM62A7_SUPPORT_V_MPU_OPP	BIT(2)

#define AM62L3_EFUSE_E_MPU_OPP		5
#define AM62L3_EFUSE_O_MPU_OPP		15

#define AM62L3_SUPPORT_E_MPU_OPP	BIT(0)
#define AM62L3_SUPPORT_O_MPU_OPP	BIT(1)

#define AM62P5_EFUSE_O_MPU_OPP		15
#define AM62P5_EFUSE_S_MPU_OPP		19
#define AM62P5_EFUSE_T_MPU_OPP		20

@@ -213,6 +219,22 @@ static unsigned long am625_efuse_xlate(struct ti_cpufreq_data *opp_data,
        return calculated_efuse;
}

static unsigned long am62l3_efuse_xlate(struct ti_cpufreq_data *opp_data,
                                        unsigned long efuse)
{
        unsigned long calculated_efuse = AM62L3_SUPPORT_E_MPU_OPP;

        switch (efuse) {
        case AM62L3_EFUSE_O_MPU_OPP:
                calculated_efuse |= AM62L3_SUPPORT_O_MPU_OPP;
                fallthrough;
        case AM62L3_EFUSE_E_MPU_OPP:
                calculated_efuse |= AM62L3_SUPPORT_E_MPU_OPP;
        }

        return calculated_efuse;
}

static struct ti_cpufreq_soc_data am3x_soc_data = {
        .efuse_xlate = amx3_efuse_xlate,
        .efuse_fallback = AM33XX_800M_ARM_MPU_MAX_FREQ,

@@ -313,8 +335,9 @@ static struct ti_cpufreq_soc_data am3517_soc_data = {
static const struct soc_device_attribute k3_cpufreq_soc[] = {
        { .family = "AM62X", },
        { .family = "AM62AX", },
        { .family = "AM62PX", },
        { .family = "AM62DX", },
        { .family = "AM62LX", },
        { .family = "AM62PX", },
        { /* sentinel */ }
};

@@ -335,6 +358,14 @@ static struct ti_cpufreq_soc_data am62a7_soc_data = {
        .multi_regulator = false,
};

static struct ti_cpufreq_soc_data am62l3_soc_data = {
        .efuse_xlate = am62l3_efuse_xlate,
        .efuse_offset = 0x0,
        .efuse_mask = 0x07c0,
        .efuse_shift = 0x6,
        .multi_regulator = false,
};

static struct ti_cpufreq_soc_data am62p5_soc_data = {
        .efuse_xlate = am62p5_efuse_xlate,
        .efuse_offset = 0x0,

@@ -463,6 +494,7 @@ static const struct of_device_id ti_cpufreq_of_match[] __maybe_unused = {
        { .compatible = "ti,am625", .data = &am625_soc_data, },
        { .compatible = "ti,am62a7", .data = &am62a7_soc_data, },
        { .compatible = "ti,am62d2", .data = &am62a7_soc_data, },
        { .compatible = "ti,am62l3", .data = &am62l3_soc_data, },
        { .compatible = "ti,am62p5", .data = &am62p5_soc_data, },
        /* legacy */
        { .compatible = "ti,omap3430", .data = &omap34xx_soc_data, },
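The `am62l3_efuse_xlate()` hunk relies on a deliberate switch fallthrough so that the O-OPP fuse value enables both OPP support bits, while any unrecognized fuse value falls back to the E OPP alone. A standalone restatement that can be exercised outside the kernel (with `BIT()` expanded by hand; the helper name here is illustrative):

```c
#include <assert.h>

#define AM62L3_EFUSE_E_MPU_OPP          5
#define AM62L3_EFUSE_O_MPU_OPP          15
#define AM62L3_SUPPORT_E_MPU_OPP        (1UL << 0)
#define AM62L3_SUPPORT_O_MPU_OPP        (1UL << 1)

/* Mirrors am62l3_efuse_xlate() from the hunk above, minus opp_data. */
static unsigned long am62l3_xlate(unsigned long efuse)
{
        unsigned long calculated = AM62L3_SUPPORT_E_MPU_OPP;

        switch (efuse) {
        case AM62L3_EFUSE_O_MPU_OPP:
                calculated |= AM62L3_SUPPORT_O_MPU_OPP;
                /* fall through: O-OPP parts also support the E OPPs */
        case AM62L3_EFUSE_E_MPU_OPP:
                calculated |= AM62L3_SUPPORT_E_MPU_OPP;
        }

        return calculated;
}
```

The O fuse value yields both bits, the E fuse value (or anything else) yields only the E bit; higher-binned silicon strictly extends the OPP set of lower bins.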
@@ -239,7 +239,7 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,

        /* Find the shortest expected idle interval. */
        predicted_ns = get_typical_interval(data) * NSEC_PER_USEC;
        if (predicted_ns > RESIDENCY_THRESHOLD_NS) {
        if (predicted_ns > RESIDENCY_THRESHOLD_NS || tick_nohz_tick_stopped()) {
                unsigned int timer_us;

                /* Determine the time till the closest timer. */

@@ -259,6 +259,16 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
                                      RESOLUTION * DECAY * NSEC_PER_USEC);
                /* Use the lowest expected idle interval to pick the idle state. */
                predicted_ns = min((u64)timer_us * NSEC_PER_USEC, predicted_ns);
                /*
                 * If the tick is already stopped, the cost of possible short
                 * idle duration misprediction is much higher, because the CPU
                 * may be stuck in a shallow idle state for a long time as a
                 * result of it. In that case, say we might mispredict and use
                 * the known time till the closest timer event for the idle
                 * state selection.
                 */
                if (tick_nohz_tick_stopped() && predicted_ns < TICK_NSEC)
                        predicted_ns = data->next_timer_ns;
        } else {
                /*
                 * Because the next timer event is not going to be determined

@@ -271,7 +281,7 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
                data->bucket = BUCKETS - 1;
        }

        if (unlikely(drv->state_count <= 1 || latency_req == 0) ||
        if (drv->state_count <= 1 || latency_req == 0 ||
            ((data->next_timer_ns < drv->states[1].target_residency_ns ||
              latency_req < drv->states[1].exit_latency_ns) &&
             !dev->states_usage[0].disable)) {

@@ -284,16 +294,6 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
                return 0;
        }

        /*
         * If the tick is already stopped, the cost of possible short idle
         * duration misprediction is much higher, because the CPU may be stuck
         * in a shallow idle state for a long time as a result of it. In that
         * case, say we might mispredict and use the known time till the closest
         * timer event for the idle state selection.
         */
        if (tick_nohz_tick_stopped() && predicted_ns < TICK_NSEC)
                predicted_ns = data->next_timer_ns;

        /*
         * Find the idle state with the lowest power while satisfying
         * our constraints.
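The two menu hunks above move the tick-stopped fallback inside the branch that actually computes the time till the closest timer, so the prediction is always checked against that timer when the tick is off. A simplified sketch of the resulting rule (flat function shape and the 250 Hz tick constant are illustrative, not the governor's real control flow):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define NSEC_PER_USEC   1000ULL
#define TICK_NSEC       4000000ULL      /* illustrative: 250 Hz tick */

/*
 * Clamp the predicted idle interval by the next timer, and if the tick
 * is already stopped while the prediction is still shorter than a tick,
 * distrust the prediction and use the known next-timer time instead.
 */
static uint64_t effective_prediction(uint64_t predicted_ns,
                                     uint64_t timer_us,
                                     uint64_t next_timer_ns,
                                     bool tick_stopped)
{
        uint64_t t = timer_us * NSEC_PER_USEC;

        if (t < predicted_ns)
                predicted_ns = t;       /* never sleep past the next timer */
        if (tick_stopped && predicted_ns < TICK_NSEC)
                predicted_ns = next_timer_ns;
        return predicted_ns;
}
```

With the tick stopped, a short misprediction can strand the CPU in a shallow state until the next timer fires, which is why the fallback widens the estimate rather than narrowing it.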
@@ -48,12 +48,11 @@
 * in accordance with what happened last time.
 *
 * The "hits" metric reflects the relative frequency of situations in which the
 * sleep length and the idle duration measured after CPU wakeup fall into the
 * same bin (that is, the CPU appears to wake up "on time" relative to the sleep
 * length). In turn, the "intercepts" metric reflects the relative frequency of
 * non-timer wakeup events for which the measured idle duration falls into a bin
 * that corresponds to an idle state shallower than the one whose bin is fallen
 * into by the sleep length (these events are also referred to as "intercepts"
 * sleep length and the idle duration measured after CPU wakeup are close enough
 * (that is, the CPU appears to wake up "on time" relative to the sleep length).
 * In turn, the "intercepts" metric reflects the relative frequency of non-timer
 * wakeup events for which the measured idle duration is significantly different
 * from the sleep length (these events are also referred to as "intercepts"
 * below).
 *
 * The governor also counts "intercepts" with the measured idle duration below

@@ -75,12 +74,17 @@
 *    than the candidate one (it represents the cases in which the CPU was
 *    likely woken up by a non-timer wakeup source).
 *
 *    Also find the idle state with the maximum intercepts metric (if there are
 *    multiple states with the maximum intercepts metric, choose the one with
 *    the highest index).
 *
 * 2. If the second sum computed in step 1 is greater than a half of the sum of
 *    both metrics for the candidate state bin and all subsequent bins (if any),
 *    a shallower idle state is likely to be more suitable, so look for it.
 *
 *    - Traverse the enabled idle states shallower than the candidate one in the
 *      descending order.
 *      descending order, starting at the state with the maximum intercepts
 *      metric found in step 1.
 *
 *    - For each of them compute the sum of the "intercepts" metrics over all
 *      of the idle states between it and the candidate one (including the

@@ -167,6 +171,7 @@ static void teo_decay(unsigned int *metric)
 */
static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
{
        s64 lat_ns = drv->states[dev->last_state_idx].exit_latency_ns;
        struct teo_cpu *cpu_data = this_cpu_ptr(&teo_cpus);
        int i, idx_timer = 0, idx_duration = 0;
        s64 target_residency_ns, measured_ns;

@@ -182,8 +187,6 @@ static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
         */
                measured_ns = S64_MAX;
        } else {
                s64 lat_ns = drv->states[dev->last_state_idx].exit_latency_ns;

                measured_ns = dev->last_residency_ns;
                /*
                 * The delay between the wakeup and the first instruction

@@ -239,15 +242,31 @@ static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
                        cpu_data->state_bins[drv->state_count-1].hits += PULSE;
                        return;
                }
                /*
                 * If intercepts within the tick period range are not frequent
                 * enough, count this wakeup as a hit, since it is likely that
                 * the tick has woken up the CPU because an expected intercept
                 * was not there. Otherwise, one of the intercepts may have
                 * been incidentally preceded by the tick wakeup.
                 */
                if (3 * cpu_data->tick_intercepts < 2 * total) {
                        cpu_data->state_bins[idx_timer].hits += PULSE;
                        return;
                }
        }

        /*
         * If the measured idle duration falls into the same bin as the sleep
         * length, this is a "hit", so update the "hits" metric for that bin.
         * If the measured idle duration (adjusted for the entered state exit
         * latency) falls into the same bin as the sleep length and the latter
         * is less than the "raw" measured idle duration (so the wakeup appears
         * to have occurred after the anticipated timer event), this is a "hit",
         * so update the "hits" metric for that bin.
         *
         * Otherwise, update the "intercepts" metric for the bin fallen into by
         * the measured idle duration.
         */
        if (idx_timer == idx_duration) {
        if (idx_timer == idx_duration &&
            cpu_data->sleep_length_ns - measured_ns < lat_ns / 2) {
                cpu_data->state_bins[idx_timer].hits += PULSE;
        } else {
                cpu_data->state_bins[idx_duration].intercepts += PULSE;

@@ -294,8 +313,10 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
        ktime_t delta_tick = TICK_NSEC / 2;
        unsigned int idx_intercept_sum = 0;
        unsigned int intercept_sum = 0;
        unsigned int intercept_max = 0;
        unsigned int idx_hit_sum = 0;
        unsigned int hit_sum = 0;
        int intercept_max_idx = -1;
        int constraint_idx = 0;
        int idx0 = 0, idx = -1;
        s64 duration_ns;

@@ -326,17 +347,32 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
        if (!dev->states_usage[0].disable)
                idx = 0;

        /* Compute the sums of metrics for early wakeup pattern detection. */
        /*
         * Compute the sums of metrics for early wakeup pattern detection and
         * look for the state bin with the maximum intercepts metric below the
         * deepest enabled one (if there are multiple states with the maximum
         * intercepts metric, choose the one with the highest index).
         */
        for (i = 1; i < drv->state_count; i++) {
                struct teo_bin *prev_bin = &cpu_data->state_bins[i-1];
                unsigned int prev_intercepts = prev_bin->intercepts;
                struct cpuidle_state *s = &drv->states[i];

                /*
                 * Update the sums of idle state metrics for all of the states
                 * shallower than the current one.
                 */
                intercept_sum += prev_bin->intercepts;
                hit_sum += prev_bin->hits;
                intercept_sum += prev_intercepts;
                /*
                 * Check if this is the bin with the maximum number of
                 * intercepts so far and in that case update the index of
                 * the state with the maximum intercepts metric.
                 */
                if (prev_intercepts >= intercept_max) {
                        intercept_max = prev_intercepts;
                        intercept_max_idx = i - 1;
                }

                if (dev->states_usage[i].disable)
                        continue;

@@ -388,12 +424,34 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
                        while (min_idx < idx &&
                               drv->states[min_idx].target_residency_ns < TICK_NSEC)
                                min_idx++;

                        /*
                         * Avoid selecting a state with a lower index, but with
                         * the same target residency as the current candidate
                         * one.
                         */
                        if (drv->states[min_idx].target_residency_ns ==
                            drv->states[idx].target_residency_ns)
                                goto constraint;
                }

                /*
                 * Look for the deepest idle state whose target residency had
                 * not exceeded the idle duration in over a half of the relevant
                 * cases in the past.
                 * If the minimum state index is greater than or equal to the
                 * index of the state with the maximum intercepts metric and
                 * the corresponding state is enabled, there is no need to look
                 * at the deeper states.
                 */
                if (min_idx >= intercept_max_idx &&
                    !dev->states_usage[min_idx].disable) {
                        idx = min_idx;
                        goto constraint;
                }

                /*
                 * Look for the deepest enabled idle state, at most as deep as
                 * the one with the maximum intercepts metric, whose target
                 * residency had not been greater than the idle duration in over
                 * a half of the relevant cases in the past.
                 *
                 * Take the possible duration limitation present if the tick
                 * has been stopped already into account.

@@ -405,11 +463,13 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
                                continue;

                        idx = i;
                        if (2 * intercept_sum > idx_intercept_sum)
                        if (2 * intercept_sum > idx_intercept_sum &&
                            i <= intercept_max_idx)
                                break;
                }
        }

constraint:
        /*
         * If there is a latency constraint, it may be necessary to select an
         * idle state shallower than the current candidate one.

@@ -464,7 +524,7 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
         * total wakeup events, do not stop the tick.
         */
        if (drv->states[idx].target_residency_ns < TICK_NSEC &&
            cpu_data->tick_intercepts > cpu_data->total / 2 + cpu_data->total / 8)
            3 * cpu_data->tick_intercepts >= 2 * cpu_data->total)
                duration_ns = TICK_NSEC / 2;

end:
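The new scan in `teo_select()` tracks, while summing the "intercepts" metrics, which shallow bin has the most intercepts, resolving ties toward the deeper state (`>=`). A small array-based sketch of just that bookkeeping, outside the governor (the function name is illustrative):

```c
#include <assert.h>

/*
 * Return the index of the bin with the maximum "intercepts" count among
 * bins shallower than the deepest one, i.e. indices 0 .. count-2, with
 * ties going to the higher (deeper) index; -1 if there is only one bin.
 */
static int max_intercepts_idx(const unsigned int *intercepts, int count)
{
        unsigned int intercept_max = 0;
        int idx = -1;
        int i;

        for (i = 1; i < count; i++) {
                unsigned int prev = intercepts[i - 1];

                if (prev >= intercept_max) {    /* >= keeps the deepest tie */
                        intercept_max = prev;
                        idx = i - 1;
                }
        }
        return idx;
}
```

In the governor this index then bounds the shallow-state search, so the traversal can stop at the state where early wakeups actually cluster instead of walking all the way down.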
@ -77,7 +77,6 @@ static void malidp_crtc_atomic_disable(struct drm_crtc *crtc,
|
|||
crtc);
|
||||
struct malidp_drm *malidp = crtc_to_malidp_device(crtc);
|
||||
struct malidp_hw_device *hwdev = malidp->dev;
|
||||
int err;
|
||||
|
||||
/* always disable planes on the CRTC that is being turned off */
|
||||
drm_atomic_helper_disable_planes_on_crtc(old_state, false);
|
||||
|
|
@ -87,10 +86,7 @@ static void malidp_crtc_atomic_disable(struct drm_crtc *crtc,
|
|||
|
||||
clk_disable_unprepare(hwdev->pxlclk);
|
||||
|
||||
err = pm_runtime_put(crtc->dev->dev);
|
||||
if (err < 0) {
|
||||
DRM_DEBUG_DRIVER("Failed to disable runtime power management: %d\n", err);
|
||||
}
|
||||
pm_runtime_put(crtc->dev->dev);
|
||||
}
|
||||
|
||||
static const struct gamma_curve_segment {
|
||||
|
|
|
|||
|
|
@@ -280,9 +280,7 @@ static void imx8qm_ldb_bridge_atomic_disable(struct drm_bridge *bridge,
 	clk_disable_unprepare(imx8qm_ldb->clk_bypass);
 	clk_disable_unprepare(imx8qm_ldb->clk_pixel);

-	ret = pm_runtime_put(dev);
-	if (ret < 0)
-		DRM_DEV_ERROR(dev, "failed to put runtime PM: %d\n", ret);
+	pm_runtime_put(dev);
 }

 static const u32 imx8qm_ldb_bus_output_fmts[] = {

@@ -282,9 +282,7 @@ static void imx8qxp_ldb_bridge_atomic_disable(struct drm_bridge *bridge,
 	if (is_split && companion)
 		companion->funcs->atomic_disable(companion, state);

-	ret = pm_runtime_put(dev);
-	if (ret < 0)
-		DRM_DEV_ERROR(dev, "failed to put runtime PM: %d\n", ret);
+	pm_runtime_put(dev);
 }

 static const u32 imx8qxp_ldb_bus_output_fmts[] = {

@@ -181,11 +181,8 @@ static void imx8qxp_pc_bridge_atomic_disable(struct drm_bridge *bridge,
 {
 	struct imx8qxp_pc_channel *ch = bridge->driver_private;
 	struct imx8qxp_pc *pc = ch->pc;
-	int ret;

-	ret = pm_runtime_put(pc->dev);
-	if (ret < 0)
-		DRM_DEV_ERROR(pc->dev, "failed to put runtime PM: %d\n", ret);
+	pm_runtime_put(pc->dev);
 }

 static const u32 imx8qxp_pc_bus_output_fmts[] = {

@@ -127,11 +127,8 @@ static void imx8qxp_pxl2dpi_bridge_atomic_disable(struct drm_bridge *bridge,
 					    struct drm_atomic_state *state)
 {
 	struct imx8qxp_pxl2dpi *p2d = bridge->driver_private;
-	int ret;

-	ret = pm_runtime_put(p2d->dev);
-	if (ret < 0)
-		DRM_DEV_ERROR(p2d->dev, "failed to put runtime PM: %d\n", ret);
+	pm_runtime_put(p2d->dev);

 	if (p2d->companion)
 		p2d->companion->funcs->atomic_disable(p2d->companion, state);

@@ -30,12 +30,12 @@ pvr_power_get(struct pvr_device *pvr_dev)
 	return pm_runtime_resume_and_get(drm_dev->dev);
 }

-static __always_inline int
+static __always_inline void
 pvr_power_put(struct pvr_device *pvr_dev)
 {
 	struct drm_device *drm_dev = from_pvr_device(pvr_dev);

-	return pm_runtime_put(drm_dev->dev);
+	pm_runtime_put(drm_dev->dev);
 }

 int pvr_power_domains_init(struct pvr_device *pvr_dev);

@@ -300,7 +300,7 @@ dc_crtc_atomic_disable(struct drm_crtc *crtc, struct drm_atomic_state *state)
 		drm_atomic_get_new_crtc_state(state, crtc);
 	struct dc_drm_device *dc_drm = to_dc_drm_device(crtc->dev);
 	struct dc_crtc *dc_crtc = to_dc_crtc(crtc);
-	int idx, ret;
+	int idx;

 	if (!drm_dev_enter(crtc->dev, &idx))
 		goto out;
@@ -313,16 +313,10 @@ dc_crtc_atomic_disable(struct drm_crtc *crtc, struct drm_atomic_state *state)
 	dc_fg_disable_clock(dc_crtc->fg);

 	/* request pixel engine power-off as plane is off too */
-	ret = pm_runtime_put(dc_drm->pe->dev);
-	if (ret)
-		dc_crtc_err(crtc, "failed to put DC pixel engine RPM: %d\n",
-			    ret);
+	pm_runtime_put(dc_drm->pe->dev);

 	/* request display engine power-off when CRTC is disabled */
-	ret = pm_runtime_put(dc_crtc->de->dev);
-	if (ret < 0)
-		dc_crtc_err(crtc, "failed to put DC display engine RPM: %d\n",
-			    ret);
+	pm_runtime_put(dc_crtc->de->dev);

 	drm_dev_exit(idx);

@@ -848,7 +848,6 @@ static void vc4_hdmi_encoder_post_crtc_powerdown(struct drm_encoder *encoder,
 	struct vc4_hdmi *vc4_hdmi = encoder_to_vc4_hdmi(encoder);
 	struct drm_device *drm = vc4_hdmi->connector.dev;
 	unsigned long flags;
-	int ret;
 	int idx;

 	mutex_lock(&vc4_hdmi->mutex);
@@ -867,9 +866,7 @@ static void vc4_hdmi_encoder_post_crtc_powerdown(struct drm_encoder *encoder,
 	clk_disable_unprepare(vc4_hdmi->pixel_bvb_clock);
 	clk_disable_unprepare(vc4_hdmi->pixel_clock);

-	ret = pm_runtime_put(&vc4_hdmi->pdev->dev);
-	if (ret < 0)
-		drm_err(drm, "Failed to release power domain: %d\n", ret);
+	pm_runtime_put(&vc4_hdmi->pdev->dev);

 	drm_dev_exit(idx);

@@ -542,7 +542,7 @@ static void vc4_vec_encoder_disable(struct drm_encoder *encoder,
 {
 	struct drm_device *drm = encoder->dev;
 	struct vc4_vec *vec = encoder_to_vc4_vec(encoder);
-	int idx, ret;
+	int idx;

 	if (!drm_dev_enter(drm, &idx))
 		return;
@@ -556,17 +556,9 @@ static void vc4_vec_encoder_disable(struct drm_encoder *encoder,

 	clk_disable_unprepare(vec->clock);

-	ret = pm_runtime_put(&vec->pdev->dev);
-	if (ret < 0) {
-		drm_err(drm, "Failed to release power domain: %d\n", ret);
-		goto err_dev_exit;
-	}
+	pm_runtime_put(&vec->pdev->dev);

 	drm_dev_exit(idx);
-	return;
-
-err_dev_exit:
-	drm_dev_exit(idx);
 }

 static void vc4_vec_encoder_enable(struct drm_encoder *encoder,

@@ -101,9 +101,7 @@ static int omap_hwspinlock_probe(struct platform_device *pdev)
 	 * runtime PM will make sure the clock of this module is
 	 * enabled again iff at least one lock is requested
 	 */
-	ret = pm_runtime_put(&pdev->dev);
-	if (ret < 0)
-		return ret;
+	pm_runtime_put(&pdev->dev);

 	/* one of the four lsb's must be set, and nothing else */
 	if (hweight_long(i & 0xf) != 1 || i > 8)

@@ -451,10 +451,10 @@ err:
 	return ret;
 }

-static int debug_disable_func(void)
+static void debug_disable_func(void)
 {
 	struct debug_drvdata *drvdata;
-	int cpu, ret, err = 0;
+	int cpu;

 	/*
 	 * Disable debug power domains, records the error and keep
@@ -466,12 +466,8 @@ static int debug_disable_func(void)
 		if (!drvdata)
 			continue;

-		ret = pm_runtime_put(drvdata->dev);
-		if (ret < 0)
-			err = ret;
+		pm_runtime_put(drvdata->dev);
 	}
-
-	return err;
 }

 static ssize_t debug_func_knob_write(struct file *f,
@@ -492,7 +488,7 @@ static ssize_t debug_func_knob_write(struct file *f,
 	if (val)
 		ret = debug_enable_func();
 	else
-		ret = debug_disable_func();
+		debug_disable_func();

 	if (ret) {
 		pr_err("%s: unable to %s debug function: %d\n",

@@ -45,6 +45,7 @@
 #include <linux/kernel.h>
 #include <linux/cpuidle.h>
 #include <linux/tick.h>
+#include <linux/time64.h>
 #include <trace/events/power.h>
 #include <linux/sched.h>
 #include <linux/sched/smt.h>
@@ -63,8 +64,6 @@
 #include <asm/fpu/api.h>
 #include <asm/smp.h>

-#define INTEL_IDLE_VERSION "0.5.1"
-
 static struct cpuidle_driver intel_idle_driver = {
 	.name = "intel_idle",
 	.owner = THIS_MODULE,
@@ -72,10 +71,18 @@ static struct cpuidle_driver intel_idle_driver = {
 /* intel_idle.max_cstate=0 disables driver */
 static int max_cstate = CPUIDLE_STATE_MAX - 1;
 static unsigned int disabled_states_mask __read_mostly;
-static unsigned int preferred_states_mask __read_mostly;
 static bool force_irq_on __read_mostly;
 static bool ibrs_off __read_mostly;

+/* The maximum allowed length for the 'table' module parameter */
+#define MAX_CMDLINE_TABLE_LEN 256
+/* Maximum allowed C-state latency */
+#define MAX_CMDLINE_LATENCY_US (5 * USEC_PER_MSEC)
+/* Maximum allowed C-state target residency */
+#define MAX_CMDLINE_RESIDENCY_US (100 * USEC_PER_MSEC)
+
+static char cmdline_table_str[MAX_CMDLINE_TABLE_LEN] __read_mostly;
+
 static struct cpuidle_device __percpu *intel_idle_cpuidle_devices;

 static unsigned long auto_demotion_disable_flags;
@@ -107,6 +114,9 @@ static struct device *sysfs_root __initdata;
 static const struct idle_cpu *icpu __initdata;
 static struct cpuidle_state *cpuidle_state_table __initdata;

+/* C-states data from the 'intel_idle.table' cmdline parameter */
+static struct cpuidle_state cmdline_states[CPUIDLE_STATE_MAX] __initdata;
+
 static unsigned int mwait_substates __initdata;

 /*

@@ -2051,25 +2061,6 @@ static void __init skx_idle_state_table_update(void)
 	}
 }

-/**
- * adl_idle_state_table_update - Adjust AlderLake idle states table.
- */
-static void __init adl_idle_state_table_update(void)
-{
-	/* Check if user prefers C1 over C1E. */
-	if (preferred_states_mask & BIT(1) && !(preferred_states_mask & BIT(2))) {
-		cpuidle_state_table[0].flags &= ~CPUIDLE_FLAG_UNUSABLE;
-		cpuidle_state_table[1].flags |= CPUIDLE_FLAG_UNUSABLE;
-
-		/* Disable C1E by clearing the "C1E promotion" bit. */
-		c1e_promotion = C1E_PROMOTION_DISABLE;
-		return;
-	}
-
-	/* Make sure C1E is enabled by default */
-	c1e_promotion = C1E_PROMOTION_ENABLE;
-}
-
 /**
  * spr_idle_state_table_update - Adjust Sapphire Rapids idle states table.
  */

@@ -2176,11 +2167,6 @@ static void __init intel_idle_init_cstates_icpu(struct cpuidle_driver *drv)
 	case INTEL_EMERALDRAPIDS_X:
 		spr_idle_state_table_update();
 		break;
-	case INTEL_ALDERLAKE:
-	case INTEL_ALDERLAKE_L:
-	case INTEL_ATOM_GRACEMONT:
-		adl_idle_state_table_update();
-		break;
 	case INTEL_ATOM_SILVERMONT:
 	case INTEL_ATOM_AIRMONT:
 		byt_cht_auto_demotion_disable();

@@ -2420,6 +2406,197 @@ static void __init intel_idle_sysfs_uninit(void)
 	put_device(sysfs_root);
 }

+/**
+ * get_cmdline_field - Get the current field from a cmdline string.
+ * @args: The cmdline string to get the current field from.
+ * @field: Pointer to the current field upon return.
+ * @sep: The fields separator character.
+ *
+ * Examples:
+ *  Input: args="C1:1:1,C1E:2:10", sep=':'
+ *  Output: field="C1", return "1:1,C1E:2:10"
+ *  Input: args="C1:1:1,C1E:2:10", sep=','
+ *  Output: field="C1:1:1", return "C1E:2:10"
+ *  Input: args="::", sep=':'
+ *  Output: field="", return ":"
+ *
+ * Return: The continuation of the cmdline string after the field or NULL.
+ */
+static char *get_cmdline_field(char *args, char **field, char sep)
+{
+	unsigned int i;
+
+	for (i = 0; args[i] && !isspace(args[i]); i++) {
+		if (args[i] == sep)
+			break;
+	}
+
+	*field = args;
+
+	if (args[i] != sep)
+		return NULL;
+
+	args[i] = '\0';
+	return args + i + 1;
+}
+
+/**
+ * validate_cmdline_cstate - Validate a C-state from cmdline.
+ * @state: The C-state to validate.
+ * @prev_state: The previous C-state in the table or NULL.
+ *
+ * Return: 0 if the C-state is valid or -EINVAL otherwise.
+ */
+static int validate_cmdline_cstate(struct cpuidle_state *state,
+				   struct cpuidle_state *prev_state)
+{
+	if (state->exit_latency == 0)
+		/* Exit latency 0 can only be used for the POLL state */
+		return -EINVAL;
+
+	if (state->exit_latency > MAX_CMDLINE_LATENCY_US)
+		return -EINVAL;
+
+	if (state->target_residency > MAX_CMDLINE_RESIDENCY_US)
+		return -EINVAL;
+
+	if (state->target_residency < state->exit_latency)
+		return -EINVAL;
+
+	if (!prev_state)
+		return 0;
+
+	if (state->exit_latency <= prev_state->exit_latency)
+		return -EINVAL;
+
+	if (state->target_residency <= prev_state->target_residency)
+		return -EINVAL;
+
+	return 0;
+}
+
+/**
+ * cmdline_table_adjust - Adjust the C-states table with data from cmdline.
+ * @drv: cpuidle driver (assumed to point to intel_idle_driver).
+ *
+ * Adjust the C-states table with data from the 'intel_idle.table' module
+ * parameter (if specified).
+ */
+static void __init cmdline_table_adjust(struct cpuidle_driver *drv)
+{
+	char *args = cmdline_table_str;
+	struct cpuidle_state *state;
+	int i;
+
+	if (args[0] == '\0')
+		/* The 'intel_idle.table' module parameter was not specified */
+		return;
+
+	/* Create a copy of the C-states table */
+	for (i = 0; i < drv->state_count; i++)
+		cmdline_states[i] = drv->states[i];
+
+	/*
+	 * Adjust the C-states table copy with data from the 'intel_idle.table'
+	 * module parameter.
+	 */
+	while (args) {
+		char *fields, *name, *val;
+
+		/*
+		 * Get the next C-state definition, which is expected to be
+		 * '<name>:<latency_us>:<target_residency_us>'. Treat "empty"
+		 * fields as unchanged. For example,
+		 * '<name>::<target_residency_us>' leaves the latency unchanged.
+		 */
+		args = get_cmdline_field(args, &fields, ',');
+
+		/* name */
+		fields = get_cmdline_field(fields, &name, ':');
+		if (!fields)
+			goto error;
+
+		if (!strcmp(name, "POLL")) {
+			pr_err("Cannot adjust POLL\n");
+			continue;
+		}
+
+		/* Find the C-state by its name */
+		state = NULL;
+		for (i = 0; i < drv->state_count; i++) {
+			if (!strcmp(name, drv->states[i].name)) {
+				state = &cmdline_states[i];
+				break;
+			}
+		}
+
+		if (!state) {
+			pr_err("C-state '%s' was not found\n", name);
+			continue;
+		}
+
+		/* Latency */
+		fields = get_cmdline_field(fields, &val, ':');
+		if (!fields)
+			goto error;
+
+		if (*val) {
+			if (kstrtouint(val, 0, &state->exit_latency))
+				goto error;
+		}
+
+		/* Target residency */
+		fields = get_cmdline_field(fields, &val, ':');
+
+		if (*val) {
+			if (kstrtouint(val, 0, &state->target_residency))
+				goto error;
+		}
+
+		/*
+		 * Allow for 3 more fields, but ignore them. Helps to make
+		 * possible future extensions of the cmdline format backward
+		 * compatible.
+		 */
+		for (i = 0; fields && i < 3; i++) {
+			fields = get_cmdline_field(fields, &val, ':');
+			if (!fields)
+				break;
+		}
+
+		if (fields) {
+			pr_err("Too many fields for C-state '%s'\n", state->name);
+			goto error;
+		}
+
+		pr_info("C-state from cmdline: name=%s, latency=%u, residency=%u\n",
+			state->name, state->exit_latency, state->target_residency);
+	}
+
+	/* Validate the adjusted C-states, start with index 1 to skip POLL */
+	for (i = 1; i < drv->state_count; i++) {
+		struct cpuidle_state *prev_state;
+
+		state = &cmdline_states[i];
+		prev_state = &cmdline_states[i - 1];
+
+		if (validate_cmdline_cstate(state, prev_state)) {
+			pr_err("C-state '%s' validation failed\n", state->name);
+			goto error;
+		}
+	}
+
+	/* Copy the adjusted C-states table back */
+	for (i = 1; i < drv->state_count; i++)
+		drv->states[i] = cmdline_states[i];
+
+	pr_info("Adjusted C-states with data from 'intel_idle.table'\n");
+	return;
+
+error:
+	pr_info("Failed to adjust C-states with data from 'intel_idle.table'\n");
+}
+
 static int __init intel_idle_init(void)
 {
 	const struct x86_cpu_id *id;

@@ -2478,19 +2655,17 @@ static int __init intel_idle_init(void)
 		return -ENODEV;
 	}

-	pr_debug("v" INTEL_IDLE_VERSION " model 0x%X\n",
-		 boot_cpu_data.x86_model);
-
 	intel_idle_cpuidle_devices = alloc_percpu(struct cpuidle_device);
 	if (!intel_idle_cpuidle_devices)
 		return -ENOMEM;

+	intel_idle_cpuidle_driver_init(&intel_idle_driver);
+	cmdline_table_adjust(&intel_idle_driver);
+
 	retval = intel_idle_sysfs_init();
 	if (retval)
 		pr_warn("failed to initialized sysfs");

-	intel_idle_cpuidle_driver_init(&intel_idle_driver);
-
 	retval = cpuidle_register_driver(&intel_idle_driver);
 	if (retval) {
 		struct cpuidle_driver *drv = cpuidle_get_driver();

@@ -2537,17 +2712,6 @@ module_param(max_cstate, int, 0444);
 */
module_param_named(states_off, disabled_states_mask, uint, 0444);
MODULE_PARM_DESC(states_off, "Mask of disabled idle states");
-/*
- * Some platforms come with mutually exclusive C-states, so that if one is
- * enabled, the other C-states must not be used. Example: C1 and C1E on
- * Sapphire Rapids platform. This parameter allows for selecting the
- * preferred C-states among the groups of mutually exclusive C-states - the
- * selected C-states will be registered, the other C-states from the mutually
- * exclusive group won't be registered. If the platform has no mutually
- * exclusive C-states, this parameter has no effect.
- */
-module_param_named(preferred_cstates, preferred_states_mask, uint, 0444);
-MODULE_PARM_DESC(preferred_cstates, "Mask of preferred idle states");
 /*
 * Debugging option that forces the driver to enter all C-states with
 * interrupts enabled. Does not apply to C-states with

@@ -2560,3 +2724,21 @@ module_param(force_irq_on, bool, 0444);
 */
module_param(ibrs_off, bool, 0444);
MODULE_PARM_DESC(ibrs_off, "Disable IBRS when idle");
+
+/*
+ * Define the C-states table from a user input string. Expected format is
+ * 'name:latency:residency', where:
+ * - name: The C-state name.
+ * - latency: The C-state exit latency in us.
+ * - residency: The C-state target residency in us.
+ *
+ * Multiple C-states can be defined by separating them with commas:
+ * 'name1:latency1:residency1,name2:latency2:residency2'
+ *
+ * Example: intel_idle.table=C1:1:1,C1E:5:10,C6:100:600
+ *
+ * To leave latency or residency unchanged, use an empty field, for example:
+ * 'C1:1:1,C1E::10' - leaves C1E latency unchanged.
+ */
+module_param_string(table, cmdline_table_str, MAX_CMDLINE_TABLE_LEN, 0444);
+MODULE_PARM_DESC(table, "Build the C-states table from a user input string");

@@ -1974,7 +1974,9 @@ static int ccs_post_streamoff(struct v4l2_subdev *subdev)
 	struct ccs_sensor *sensor = to_ccs_sensor(subdev);
 	struct i2c_client *client = v4l2_get_subdevdata(&sensor->src->sd);

-	return pm_runtime_put(&client->dev);
+	pm_runtime_put(&client->dev);
+
+	return 0;
 }

 static int ccs_enum_mbus_code(struct v4l2_subdev *subdev,

@@ -241,7 +241,7 @@ unsigned int dev_pm_opp_get_level(struct dev_pm_opp *opp)
 {
 	if (IS_ERR_OR_NULL(opp) || !opp->available) {
 		pr_err("%s: Invalid parameters\n", __func__);
-		return 0;
+		return U32_MAX;
 	}

 	return opp->level;

@@ -956,7 +956,6 @@ free_opp:
 /* Initializes OPP tables based on new bindings */
 static int _of_add_opp_table_v2(struct device *dev, struct opp_table *opp_table)
 {
-	struct device_node *np;
 	int ret, count = 0;
 	struct dev_pm_opp *opp;

@@ -971,13 +970,12 @@ static int _of_add_opp_table_v2(struct device *dev, struct opp_table *opp_table)
 	}

 	/* We have opp-table node now, iterate over it and add OPPs */
-	for_each_available_child_of_node(opp_table->np, np) {
+	for_each_available_child_of_node_scoped(opp_table->np, np) {
 		opp = _opp_add_static_v2(opp_table, dev, np);
 		if (IS_ERR(opp)) {
 			ret = PTR_ERR(opp);
 			dev_err(dev, "%s: Failed to add OPP, %d\n", __func__,
 				ret);
-			of_node_put(np);
 			goto remove_static_opp;
 		} else if (opp) {
 			count++;

@@ -46,7 +46,9 @@ static int hps_release(struct inode *inode, struct file *file)
 					      struct hps_drvdata, misc_device);
 	struct device *dev = &hps->client->dev;

-	return pm_runtime_put(dev);
+	pm_runtime_put(dev);
+
+	return 0;
 }

 static const struct file_operations hps_fops = {

@@ -162,6 +162,7 @@ static int rapl_msr_write_raw(int cpu, struct reg_action *ra)

 /* List of verified CPUs. */
 static const struct x86_cpu_id pl4_support_ids[] = {
+	X86_MATCH_VFM(INTEL_ICELAKE_L, NULL),
 	X86_MATCH_VFM(INTEL_TIGERLAKE_L, NULL),
 	X86_MATCH_VFM(INTEL_ALDERLAKE, NULL),
 	X86_MATCH_VFM(INTEL_ALDERLAKE_L, NULL),

@@ -27,7 +27,7 @@ static ssize_t _attr##_show(struct device *dev, \
 								\
 	if (power_zone->ops->get_##_attr) {			\
 		if (!power_zone->ops->get_##_attr(power_zone, &value)) \
-			len = sprintf(buf, "%lld\n", value);	\
+			len = sysfs_emit(buf, "%lld\n", value);	\
 	}							\
 								\
 	return len;						\
@@ -75,7 +75,7 @@ static ssize_t show_constraint_##_attr(struct device *dev, \
 	pconst = &power_zone->constraints[id];			\
 	if (pconst && pconst->ops && pconst->ops->get_##_attr) { \
 		if (!pconst->ops->get_##_attr(power_zone, id, &value)) \
-			len = sprintf(buf, "%lld\n", value);	\
+			len = sysfs_emit(buf, "%lld\n", value);	\
 	}							\
 								\
 	return len;						\
@@ -171,9 +171,8 @@ static ssize_t show_constraint_name(struct device *dev,
 	if (pconst && pconst->ops && pconst->ops->get_name) {
 		name = pconst->ops->get_name(power_zone, id);
 		if (name) {
-			sprintf(buf, "%.*s\n", POWERCAP_CONSTRAINT_NAME_LEN - 1,
-				name);
-			len = strlen(buf);
+			len = sysfs_emit(buf, "%.*s\n",
+					 POWERCAP_CONSTRAINT_NAME_LEN - 1, name);
 		}
 	}

@@ -350,7 +349,7 @@ static ssize_t name_show(struct device *dev,
 {
 	struct powercap_zone *power_zone = to_powercap_zone(dev);

-	return sprintf(buf, "%s\n", power_zone->name);
+	return sysfs_emit(buf, "%s\n", power_zone->name);
 }

 static DEVICE_ATTR_RO(name);
@@ -438,7 +437,7 @@ static ssize_t enabled_show(struct device *dev,
 		mode = false;
 	}

-	return sprintf(buf, "%d\n", mode);
+	return sysfs_emit(buf, "%d\n", mode);
 }

 static ssize_t enabled_store(struct device *dev,

@@ -348,9 +348,9 @@ static inline int ufshcd_rpm_resume(struct ufs_hba *hba)
 	return pm_runtime_resume(&hba->ufs_device_wlun->sdev_gendev);
 }

-static inline int ufshcd_rpm_put(struct ufs_hba *hba)
+static inline void ufshcd_rpm_put(struct ufs_hba *hba)
 {
-	return pm_runtime_put(&hba->ufs_device_wlun->sdev_gendev);
+	pm_runtime_put(&hba->ufs_device_wlun->sdev_gendev);
 }

 /**

@@ -1810,13 +1810,11 @@ EXPORT_SYMBOL_GPL(usb_autopm_put_interface);
 void usb_autopm_put_interface_async(struct usb_interface *intf)
 {
 	struct usb_device *udev = interface_to_usbdev(intf);
-	int status;

 	usb_mark_last_busy(udev);
-	status = pm_runtime_put(&intf->dev);
-	dev_vdbg(&intf->dev, "%s: cnt %d -> %d\n",
-			__func__, atomic_read(&intf->dev.power.usage_count),
-			status);
+	pm_runtime_put(&intf->dev);
+	dev_vdbg(&intf->dev, "%s: cnt %d\n",
+			__func__, atomic_read(&intf->dev.power.usage_count));
 }
 EXPORT_SYMBOL_GPL(usb_autopm_put_interface_async);

@@ -132,9 +132,7 @@ static int rzg2l_wdt_stop(struct watchdog_device *wdev)
 	if (ret)
 		return ret;

-	ret = pm_runtime_put(wdev->parent);
-	if (ret < 0)
-		return ret;
+	pm_runtime_put(wdev->parent);

 	return 0;
 }

@@ -174,9 +174,7 @@ static int rzv2h_wdt_stop(struct watchdog_device *wdev)
 	if (priv->of_data->wdtdcr)
 		rzt2h_wdt_wdtdcr_count_stop(priv);

-	ret = pm_runtime_put(wdev->parent);
-	if (ret < 0)
-		return ret;
+	pm_runtime_put(wdev->parent);

 	return 0;
 }
@@ -270,9 +268,7 @@ static int rzt2h_wdt_wdtdcr_init(struct platform_device *pdev,

 	rzt2h_wdt_wdtdcr_count_stop(priv);

-	ret = pm_runtime_put(&pdev->dev);
-	if (ret < 0)
-		return ret;
+	pm_runtime_put(&pdev->dev);

 	return 0;
 }

@@ -154,6 +154,7 @@ extern int cppc_get_perf_ctrs(int cpu, struct cppc_perf_fb_ctrs *perf_fb_ctrs);
 extern int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls);
 extern int cppc_set_enable(int cpu, bool enable);
 extern int cppc_get_perf_caps(int cpu, struct cppc_perf_caps *caps);
+extern bool cppc_perf_ctrs_in_pcc_cpu(unsigned int cpu);
 extern bool cppc_perf_ctrs_in_pcc(void);
 extern unsigned int cppc_perf_to_khz(struct cppc_perf_caps *caps, unsigned int perf);
 extern unsigned int cppc_khz_to_perf(struct cppc_perf_caps *caps, unsigned int freq);
@@ -204,6 +205,10 @@ static inline int cppc_get_perf_caps(int cpu, struct cppc_perf_caps *caps)
 {
 	return -EOPNOTSUPP;
 }
+static inline bool cppc_perf_ctrs_in_pcc_cpu(unsigned int cpu)
+{
+	return false;
+}
 static inline bool cppc_perf_ctrs_in_pcc(void)
 {
 	return false;

@@ -658,7 +658,7 @@ extern void handle_fasteoi_nmi(struct irq_desc *desc);

 extern int irq_chip_compose_msi_msg(struct irq_data *data, struct msi_msg *msg);
 extern int irq_chip_pm_get(struct irq_data *data);
-extern int irq_chip_pm_put(struct irq_data *data);
+extern void irq_chip_pm_put(struct irq_data *data);
 #ifdef CONFIG_IRQ_DOMAIN_HIERARCHY
 extern void handle_fasteoi_ack_irq(struct irq_desc *desc);
 extern void handle_fasteoi_mask_irq(struct irq_desc *desc);

@@ -681,10 +681,10 @@ struct dev_pm_info {
 	struct list_head	entry;
 	struct completion	completion;
 	struct wakeup_source	*wakeup;
-	bool			work_in_progress; /* Owned by the PM core */
 	bool			wakeup_path:1;
 	bool			syscore:1;
 	bool			no_pm_callbacks:1;	/* Owned by the PM core */
+	bool			work_in_progress:1;	/* Owned by the PM core */
 	bool			smart_suspend:1;	/* Owned by the PM core */
 	bool			must_resume:1;	/* Owned by the PM core */
 	bool			may_skip_resume:1;	/* Set by subsystems */

@@ -126,6 +126,7 @@ enum tick_dep_bits {

 #ifdef CONFIG_NO_HZ_COMMON
 extern bool tick_nohz_enabled;
+extern bool tick_nohz_is_active(void);
 extern bool tick_nohz_tick_stopped(void);
 extern bool tick_nohz_tick_stopped_cpu(int cpu);
 extern void tick_nohz_idle_stop_tick(void);
@@ -142,6 +143,7 @@ extern u64 get_cpu_idle_time_us(int cpu, u64 *last_update_time);
 extern u64 get_cpu_iowait_time_us(int cpu, u64 *last_update_time);
 #else /* !CONFIG_NO_HZ_COMMON */
 #define tick_nohz_enabled (0)
+static inline bool tick_nohz_is_active(void) { return false; }
 static inline int tick_nohz_tick_stopped(void) { return 0; }
 static inline int tick_nohz_tick_stopped_cpu(int cpu) { return 0; }
 static inline void tick_nohz_idle_stop_tick(void) { }

@@ -974,7 +974,7 @@ __irq_do_set_handler(struct irq_desc *desc, irq_flow_handler_t handle,
 		irq_state_set_disabled(desc);
 		if (is_chained) {
 			desc->action = NULL;
-			WARN_ON(irq_chip_pm_put(irq_desc_get_irq_data(desc)));
+			irq_chip_pm_put(irq_desc_get_irq_data(desc));
 		}
 		desc->depth = 1;
 	}
@@ -1530,20 +1530,20 @@ int irq_chip_pm_get(struct irq_data *data)
 }

 /**
- * irq_chip_pm_put - Disable power for an IRQ chip
+ * irq_chip_pm_put - Drop a PM reference on an IRQ chip
  * @data: Pointer to interrupt specific data
  *
- * Disable the power to the IRQ chip referenced by the interrupt data
- * structure, belongs. Note that power will only be disabled, once this
- * function has been called for all IRQs that have called irq_chip_pm_get().
+ * Drop a power management reference, acquired via irq_chip_pm_get(), on the IRQ
+ * chip represented by the interrupt data structure.
+ *
+ * Note that this will not disable power to the IRQ chip until this function
+ * has been called for all IRQs that have called irq_chip_pm_get() and it may
+ * not disable power at all (if user space prevents that, for example).
  */
-int irq_chip_pm_put(struct irq_data *data)
+void irq_chip_pm_put(struct irq_data *data)
 {
 	struct device *dev = irq_get_pm_device(data);
-	int retval = 0;

-	if (IS_ENABLED(CONFIG_PM) && dev)
-		retval = pm_runtime_put(dev);
-
-	return (retval < 0) ? retval : 0;
+	if (dev)
+		pm_runtime_put(dev);
 }

@@ -1125,7 +1125,7 @@ EXPORT_SYMBOL_GPL(pm_wq);

 static int __init pm_start_workqueues(void)
 {
-	pm_wq = alloc_workqueue("pm", WQ_FREEZABLE | WQ_UNBOUND, 0);
+	pm_wq = alloc_workqueue("pm", WQ_UNBOUND, 0);
 	if (!pm_wq)
 		return -ENOMEM;

@@ -902,8 +902,8 @@ out_clean:
 	for (thr = 0; thr < nr_threads; thr++) {
 		if (data[thr].thr)
 			kthread_stop(data[thr].thr);
-		if (data[thr].cr)
-			acomp_request_free(data[thr].cr);
+
+		acomp_request_free(data[thr].cr);

 		if (!IS_ERR_OR_NULL(data[thr].cc))
 			crypto_free_acomp(data[thr].cc);
@@ -1502,8 +1502,8 @@ out_clean:
 	for (thr = 0; thr < nr_threads; thr++) {
 		if (data[thr].thr)
 			kthread_stop(data[thr].thr);
-		if (data[thr].cr)
-			acomp_request_free(data[thr].cr);
+
+		acomp_request_free(data[thr].cr);

 		if (!IS_ERR_OR_NULL(data[thr].cc))
 			crypto_free_acomp(data[thr].cc);

@@ -943,7 +943,7 @@ void clock_was_set(unsigned int bases)
 	cpumask_var_t mask;
 	int cpu;

-	if (!hrtimer_hres_active(cpu_base) && !tick_nohz_active)
+	if (!hrtimer_hres_active(cpu_base) && !tick_nohz_is_active())
 		goto out_timerfd;

 	if (!zalloc_cpumask_var(&mask, GFP_KERNEL)) {

@@ -156,7 +156,6 @@ static inline void tick_nohz_init(void) { }
 #endif

 #ifdef CONFIG_NO_HZ_COMMON
-extern unsigned long tick_nohz_active;
 extern void timers_update_nohz(void);
 extern u64 get_jiffies_update(unsigned long *basej);
 # ifdef CONFIG_SMP
@@ -171,7 +170,6 @@ extern void timer_expire_remote(unsigned int cpu);
 # endif
 #else /* CONFIG_NO_HZ_COMMON */
 static inline void timers_update_nohz(void) { }
-#define tick_nohz_active (0)
 #endif

 DECLARE_PER_CPU(struct hrtimer_cpu_base, hrtimer_bases);

@@ -693,7 +693,7 @@ void __init tick_nohz_init(void)
 * NO HZ enabled ?
 */
bool tick_nohz_enabled __read_mostly = true;
-unsigned long tick_nohz_active __read_mostly;
+static unsigned long tick_nohz_active __read_mostly;
/*
 * Enable / Disable tickless mode
 */
@@ -704,6 +704,12 @@ static int __init setup_tick_nohz(char *str)

 __setup("nohz=", setup_tick_nohz);

+bool tick_nohz_is_active(void)
+{
+	return tick_nohz_active;
+}
+EXPORT_SYMBOL_GPL(tick_nohz_is_active);
+
 bool tick_nohz_tick_stopped(void)
 {
 	struct tick_sched *ts = this_cpu_ptr(&tick_cpu_sched);

@@ -281,7 +281,7 @@ DEFINE_STATIC_KEY_FALSE(timers_migration_enabled);

 static void timers_update_migration(void)
 {
-	if (sysctl_timer_migration && tick_nohz_active)
+	if (sysctl_timer_migration && tick_nohz_is_active())
 		static_branch_enable(&timers_migration_enabled);
 	else
 		static_branch_disable(&timers_migration_enabled);

@ -3,7 +3,8 @@
|
|||
#include <linux/cpufreq.h>
|
||||
|
||||
#ifdef CONFIG_CPU_FREQ
|
||||
void rust_helper_cpufreq_register_em_with_opp(struct cpufreq_policy *policy)
|
||||
__rust_helper void
|
||||
rust_helper_cpufreq_register_em_with_opp(struct cpufreq_policy *policy)
|
||||
{
|
||||
cpufreq_register_em_with_opp(policy);
|
||||
}
|
||||
|
|
|
|||
|
|
@ -840,7 +840,6 @@ pub trait Driver {
|
|||
/// ```
|
||||
/// use kernel::{
|
||||
/// cpufreq,
|
||||
/// c_str,
|
||||
/// device::{Core, Device},
|
||||
/// macros::vtable,
|
||||
/// of, platform,
|
||||
|
|
@ -853,7 +852,7 @@ pub trait Driver {
|
|||
///
|
||||
/// #[vtable]
|
||||
/// impl cpufreq::Driver for SampleDriver {
|
||||
/// const NAME: &'static CStr = c_str!("cpufreq-sample");
|
||||
/// const NAME: &'static CStr = c"cpufreq-sample";
|
||||
/// const FLAGS: u16 = cpufreq::flags::NEED_INITIAL_FREQ_CHECK | cpufreq::flags::IS_COOLING_DEV;
|
||||
/// const BOOST_ENABLED: bool = true;
|
||||
///
|
||||
|
|
@ -1015,6 +1014,8 @@ impl<T: Driver> Registration<T> {
|
|||
..pin_init::zeroed()
|
||||
};
|
||||
|
||||
// Always inline to optimize out error path of `build_assert`.
|
||||
#[inline(always)]
|
||||
const fn copy_name(name: &'static CStr) -> [c_char; CPUFREQ_NAME_LEN] {
|
||||
let src = name.to_bytes_with_nul();
|
||||
let mut dst = [0; CPUFREQ_NAME_LEN];
|
||||
|
|
|
|||
|
|
@ -39,7 +39,7 @@ use core::ops::{Deref, DerefMut};
|
|||
/// fn set_clear_cpu(ptr: *mut bindings::cpumask, set_cpu: CpuId, clear_cpu: CpuId) {
|
||||
/// // SAFETY: The `ptr` is valid for writing and remains valid for the lifetime of the
|
||||
/// // returned reference.
|
||||
/// let mask = unsafe { Cpumask::as_mut_ref(ptr) };
|
||||
/// let mask = unsafe { Cpumask::from_raw_mut(ptr) };
|
||||
///
|
||||
/// mask.set(set_cpu);
|
||||
/// mask.clear(clear_cpu);
|
||||
|
|
@ -49,13 +49,13 @@ use core::ops::{Deref, DerefMut};
|
|||
pub struct Cpumask(Opaque<bindings::cpumask>);
|
||||
|
||||
impl Cpumask {
|
||||
/// Creates a mutable reference to an existing `struct cpumask` pointer.
|
||||
/// Creates a mutable reference from an existing `struct cpumask` pointer.
|
||||
///
|
||||
/// # Safety
|
||||
///
|
||||
/// The caller must ensure that `ptr` is valid for writing and remains valid for the lifetime
|
||||
/// of the returned reference.
|
||||
pub unsafe fn as_mut_ref<'a>(ptr: *mut bindings::cpumask) -> &'a mut Self {
|
||||
pub unsafe fn from_raw_mut<'a>(ptr: *mut bindings::cpumask) -> &'a mut Self {
|
||||
// SAFETY: Guaranteed by the safety requirements of the function.
|
||||
//
|
||||
// INVARIANT: The caller ensures that `ptr` is valid for writing and remains valid for the
|
||||
|
|
@ -63,13 +63,13 @@ impl Cpumask {
|
|||
unsafe { &mut *ptr.cast() }
|
||||
}
|
||||
|
||||
/// Creates a reference to an existing `struct cpumask` pointer.
|
||||
/// Creates a reference from an existing `struct cpumask` pointer.
|
||||
///
|
||||
/// # Safety
|
||||
///
|
||||
/// The caller must ensure that `ptr` is valid for reading and remains valid for the lifetime
|
||||
/// of the returned reference.
|
||||
pub unsafe fn as_ref<'a>(ptr: *const bindings::cpumask) -> &'a Self {
|
||||
pub unsafe fn from_raw<'a>(ptr: *const bindings::cpumask) -> &'a Self {
|
||||
// SAFETY: Guaranteed by the safety requirements of the function.
|
||||
//
|
||||
// INVARIANT: The caller ensures that `ptr` is valid for reading and remains valid for the
|
||||
|
|
|
|||
|
|
@ -315,7 +315,17 @@ endif
|
|||
$(INSTALL_DATA) lib/cpuidle.h $(DESTDIR)${includedir}/cpuidle.h
|
||||
$(INSTALL_DATA) lib/powercap.h $(DESTDIR)${includedir}/powercap.h
|
||||
|
||||
install-tools: $(OUTPUT)cpupower
|
||||
# SYSTEMD=false disables installation of the systemd unit file
|
||||
SYSTEMD ?= true
|
||||
|
||||
install-systemd:
|
||||
$(INSTALL) -d $(DESTDIR)${unitdir}
|
||||
sed 's|___CDIR___|${confdir}|; s|___LDIR___|${libexecdir}|' cpupower.service.in > '$(DESTDIR)${unitdir}/cpupower.service'
|
||||
$(SETPERM_DATA) '$(DESTDIR)${unitdir}/cpupower.service'
|
||||
|
||||
INSTALL_SYSTEMD := $(if $(filter true,$(strip $(SYSTEMD))),install-systemd)
|
||||
|
||||
install-tools: $(OUTPUT)cpupower $(INSTALL_SYSTEMD)
|
||||
$(INSTALL) -d $(DESTDIR)${bindir}
|
||||
$(INSTALL_PROGRAM) $(OUTPUT)cpupower $(DESTDIR)${bindir}
|
||||
$(INSTALL) -d $(DESTDIR)${bash_completion_dir}
|
||||
|
|
@ -324,9 +334,6 @@ install-tools: $(OUTPUT)cpupower
|
|||
$(INSTALL_DATA) cpupower-service.conf '$(DESTDIR)${confdir}'
|
||||
$(INSTALL) -d $(DESTDIR)${libexecdir}
|
||||
$(INSTALL_PROGRAM) cpupower.sh '$(DESTDIR)${libexecdir}/cpupower'
|
||||
$(INSTALL) -d $(DESTDIR)${unitdir}
|
||||
sed 's|___CDIR___|${confdir}|; s|___LDIR___|${libexecdir}|' cpupower.service.in > '$(DESTDIR)${unitdir}/cpupower.service'
|
||||
$(SETPERM_DATA) '$(DESTDIR)${unitdir}/cpupower.service'
|
||||
|
||||
install-man:
|
||||
$(INSTALL_DATA) -D man/cpupower.1 $(DESTDIR)${mandir}/man1/cpupower.1
|
||||
|
|
@ -406,4 +413,4 @@ help:
|
|||
@echo ' uninstall - Remove previously installed files from the dir defined by "DESTDIR"'
|
||||
@echo ' cmdline or Makefile config block option (default: "")'
|
||||
|
||||
.PHONY: all utils libcpupower update-po create-gmo install-lib install-tools install-man install-gmo install uninstall clean help
|
||||
.PHONY: all utils libcpupower update-po create-gmo install-lib install-systemd install-tools install-man install-gmo install uninstall clean help
|
||||
|
|
|
|||
|
|
@@ -150,6 +150,7 @@ unsigned long long cpuidle_state_get_one_value(unsigned int cpu,
 	if (len == 0)
 		return 0;
 
+	errno = 0;
 	value = strtoull(linebuf, &endp, 0);
 
 	if (endp == linebuf || errno == ERANGE)
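The added `errno = 0;` matters because `strtoull()` sets `errno` only on failure: without the reset, a stale `ERANGE` left over from an earlier call would make a perfectly valid conversion look like an overflow. A sketch of the full parse-and-check pattern (`parse_sysfs_value()` is a hypothetical wrapper for illustration, not a cpupower function):

```c
#include <errno.h>
#include <stdlib.h>

/* Clear errno before strtoull(), then treat either "no digits consumed"
 * (endp == linebuf) or a fresh ERANGE as a parse failure. */
unsigned long long parse_sysfs_value(const char *linebuf, int *err)
{
	char *endp;
	unsigned long long value;

	errno = 0;
	value = strtoull(linebuf, &endp, 0);

	if (endp == linebuf || errno == ERANGE) {
		*err = 1;
		return 0;
	}

	*err = 0;
	return value;
}
```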
@@ -193,8 +194,7 @@ static char *cpuidle_state_get_one_string(unsigned int cpu,
 	if (result == NULL)
 		return NULL;
 
-	if (result[strlen(result) - 1] == '\n')
-		result[strlen(result) - 1] = '\0';
+	result[strcspn(result, "\n")] = '\0';
 
 	return result;
 }

@@ -366,8 +366,7 @@ static char *sysfs_cpuidle_get_one_string(enum cpuidle_string which)
 	if (result == NULL)
 		return NULL;
 
-	if (result[strlen(result) - 1] == '\n')
-		result[strlen(result) - 1] = '\0';
+	result[strcspn(result, "\n")] = '\0';
 
 	return result;
 }
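The `strcspn()` form in both hunks is not just shorter, it is safer than the `strlen()`-based trim it replaces: on an empty string, `strlen(result) - 1` indexes one byte before the buffer, while `strcspn(result, "\n")` returns 0 and the store merely re-terminates the string. A standalone sketch of the idiom:

```c
#include <string.h>

/* Truncate at the first newline; a no-op re-termination when the
 * string is empty or contains no newline. */
void trim_newline(char *result)
{
	result[strcspn(result, "\n")] = '\0';
}
```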
@@ -270,7 +270,7 @@ static int get_freq_hardware(unsigned int cpu, unsigned int human)
 {
 	unsigned long freq;
 
-	if (cpupower_cpu_info.caps & CPUPOWER_CAP_APERF)
+	if (!(cpupower_cpu_info.caps & CPUPOWER_CAP_APERF))
 		return -EINVAL;
 
 	freq = cpufreq_get_freq_hardware(cpu);
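The hunk above inverts the capability test: the original guard returned `-EINVAL` precisely when the APERF/MPERF capability *was* present, i.e. exactly when the hardware could answer. A hedged sketch of the corrected guard (the struct and bit value here are illustrative stand-ins, not the cpupower definitions):

```c
#include <errno.h>

#define CAP_APERF 0x4 /* hypothetical bit value, for illustration only */

struct cpu_info {
	unsigned int caps; /* bitmask of hardware capabilities */
};

/* Bail out only when the APERF capability bit is NOT set; otherwise
 * proceed to the hardware frequency read (elided in this sketch). */
int get_freq_hardware_sketch(const struct cpu_info *info)
{
	if (!(info->caps & CAP_APERF))
		return -EINVAL;

	return 0; /* would go on to read the hardware frequency */
}
```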
@@ -111,7 +111,7 @@ static void proc_cpuidle_cpu_output(unsigned int cpu)
 	printf(_("max_cstate: C%u\n"), cstates-1);
 	printf(_("maximum allowed latency: %lu usec\n"), max_allowed_cstate);
 	printf(_("states:\t\n"));
-	for (cstate = 1; cstate < cstates; cstate++) {
+	for (cstate = 0; cstate < cstates; cstate++) {
 		printf(_("    C%d:                  "
 			 "type[C%d] "), cstate, cstate);
 		printf(_("promotion[--] demotion[--] "));

@@ -70,7 +70,7 @@ static int cpuidle_stop(void)
 			current_count[cpu][state] =
 				cpuidle_state_time(cpu, state);
 			dprint("CPU %d - State: %d - Val: %llu\n",
-			       cpu, state, previous_count[cpu][state]);
+			       cpu, state, current_count[cpu][state]);
 		}
 	}
 	return 0;