Reverting the following patches from android-4.19-stable.125

15207c29 ANDROID: add ion_stat tracepoint to common kernel
82d4e59 Revert "UPSTREAM: mm, page_alloc: spread allocations across zones before introducing fragmentation"
5e86f20 Revert "UPSTREAM: mm: use alloc_flags to record if kswapd can wake"
fbc355c Revert "BACKPORT: mm: move zone watermark accesses behind an accessor"
35be952 Revert "BACKPORT: mm: reclaim small amounts of memory when an external fragmentation event occurs"
776eba0 Revert "BACKPORT: mm, compaction: be selective about what pageblocks to clear skip hints"
ca46612 ANDROID: GKI: USB: add Android ABI padding to some structures
a10905b ANDROID: GKI: usb: core: Add support to parse config summary capability descriptors
77379e5 ANDROID: GKI: USB: pd: Extcon fix for C current
27ac613 ANDROID: GKI: usb: Add support to handle USB SMMU S1 address
52b86b2 ANDROID: GKI: usb: Add helper APIs to return xhci phys addresses
bf3e2f7 ANDROID: GKI: usb: phy: Fix ABI diff for usb_otg_state
a34e269 ANDROID: GKI: usb: phy: Fix ABI diff due to usb_phy.drive_dp_pulse
7edd303 ANDROID: GKI: usb: phy: Fix ABI diff for usb_phy_type and usb_phy.reset
639d56b ANDROID: GKI: usb: hcd: Add USB atomic notifier callback for HC died error
5b6a535 ANDROID: GKI: USB: Fix ABI diff for struct usb_bus
a9c813f ANDROID: usb: gadget: Add missing inline qualifier to stub functions
632093e ANDROID: GKI: USB: Resolve ABI diff for usb_gadget and usb_gadget_ops
f7fbc94 ANDROID: GKI: usb: Add helper API to issue stop endpoint command
5dfdaa1 ANDROID: GKI: usb: xhci: Add support for secondary interrupters
a6c834c ANDROID: GKI: usb: host: xhci: Add support for usb core indexing
c89d039 ANDROID: GKI: sched: add Android ABI padding to some structures
b14ffb0 ANDROID: GKI: sched.h: add Android ABI padding to some structures
c830822 ANDROID: GKI: sched: struct fields for Per-Sched-domain over utilization
eead514 ANDROID: GKI: sched: stub sched_isolate symbols
36e1278 ANDROID: GKI: sched: add task boost vendor fields to task_struct
3bc16e4 ANDROID: GKI: power_supply: Add FG_TYPE power-supply property
4211b82 ANDROID: GKI: power_supply: Add PROP_MOISTURE_DETECTION_ENABLED
0134f3d ANDROID: GKI: power: supply: format regression
66e2580 ANDROID: Incremental fs: wake up log pollers less often
c92446c ANDROID: Incremental fs: Fix scheduling while atomic error
adb33b8 ANDROID: Incremental fs: Avoid continually recalculating hashes
a8629be ANDROID: Incremental fs: Fix issues with very large files
c7c8c61 ANDROID: Incremental fs: Add setattr call
9d7386a ANDROID: Incremental fs: Use simple compression in log buffer
298fe8e ANDROID: Incremental fs: Fix create_file performance
580b23c ANDROID: Incremental fs: Fix compound page usercopy crash
06a024d ANDROID: Incremental fs: Clean up incfs_test build process
5e6feac ANDROID: Incremental fs: make remount log buffer change atomic
5128381 ANDROID: Incremental fs: Optimize get_filled_block
23f5b7c ANDROID: Incremental fs: Fix mislabeled __user ptrs
114b043 ANDROID: Incremental fs: Use 64-bit int for file_size when writing hash blocks
ae41ea9 ANDROID: Incremental fs: Fix remount
e251cfe ANDROID: Incremental fs: Protect get_fill_block, and add a field
759d52e ANDROID: Incremental fs: Fix crash polling 0 size read_log
21b6d8c ANDROID: Incremental fs: get_filled_blocks: better index_out
0389387 net: qualcomm: rmnet: Allow configuration updates to existing devices
95b8a4b ANDROID: GKI: regulator: core: Add support for regulator providers with sync state
653a867 ANDROID: GKI: regulator: Add proxy consumer driver
6efb91e ANDROID: power_supply: Add RTX power-supply property
e90672c ANDROID: GKI: power: supply: Add POWER_SUPPLY_PROP_CHARGE_DISABLE
d1e3253 usb: dwc3: gadget: Properly set maxpacket limit
074e6e4 usb: dwc3: gadget: Do link recovery for SS and SSP
dbee0a8 usb: dwc3: gadget: Fix request completion check
c521b70 usb: dwc3: gadget: Don't clear flags before transfer ended
d1eded7 usb: dwc3: gadget: don't enable interrupt when disabling endpoint
831494c usb: dwc3: core: add support for disabling SS instances in park mode
b0434aa usb: dwc3: don't set gadget->is_otg flag
fe60e0d usb: dwc3: gadget: Wrap around when skip TRBs
b71b0c1 ANDROID: GKI: drivers: thermal: cpu_cooling: Use CPU ID as cooling device ID
eaa42a5 ANDROID: GKI: drivers: of-thermal: Relate thermal zones using same sensor
10d3954 ANDROID: GKI: drivers: thermal: Add support for getting trip temperature
5a7902f ANDROID: GKI: Add functions of_thermal_handle_trip/of_thermal_handle_trip_temp
b078dd6 ANDROID: GKI: drivers: thermal: Add post suspend evaluate flag to thermal zone devicetree
e92e403 ANDROID: GKI: drivers: cpu_cooling: allow platform freq mitigation

Change-Id: Id2a4fb08f24ad83c880fa8e4c199ea90837649b5
Signed-off-by: Srinivasarao P <spathi@codeaurora.org>
commit e79f1dc4d1 (parent 11e756dea9)
Srinivasarao P, 2020-06-12 16:00:02 +05:30
68 changed files with 894 additions and 2682 deletions


@ -1,32 +0,0 @@
Regulator Proxy Consumer Bindings
Regulator proxy consumers provide a means to use a default regulator state
during bootup only which is removed at the end of boot. This feature can be
used in situations where a shared regulator can be scaled between several
possible voltages and hardware requires that it be at a high level at the
beginning of boot before the consumer device responsible for requesting the
high level has probed.
Optional properties:
- proxy-supply:			phandle of the regulator's own device node.
				This property is required if any of the three
				properties below are specified.
- qcom,proxy-consumer-enable:	Boolean indicating that the regulator must be
				kept enabled during boot.
- qcom,proxy-consumer-voltage:	List of two integers corresponding the minimum
				and maximum voltage allowed during boot in
				microvolts.
- qcom,proxy-consumer-current:	Minimum current in microamps required during
				boot.

Example:

	foo_vreg: regulator@0 {
		regulator-name = "foo";
		regulator-min-microvolt = <1000000>;
		regulator-max-microvolt = <2000000>;
		proxy-supply = <&foo_vreg>;
		qcom,proxy-consumer-voltage = <1500000 2000000>;
		qcom,proxy-consumer-current = <25000>;
		qcom,proxy-consumer-enable;
	};


@ -169,9 +169,6 @@ Optional property:
  Type: bool		thresholds, so the governors may mitigate by ensuring
  Size: none		timing closures and other low temperature operating
			issues.

- wake-capable-sensor:	Set to true if thermal zone sensor is wake up capable
  Type: bool		and cooling devices binded to this thermal zone are not
  Size: none		affected during suspend.
Note: The delay properties are bound to the maximum dT/dt (temperature
derivative over time) in two situations for a thermal zone:


@ -64,6 +64,7 @@ Currently, these files are in /proc/sys/vm:
- swappiness
- user_reserve_kbytes
- vfs_cache_pressure
- watermark_boost_factor
- watermark_scale_factor
- zone_reclaim_mode
@ -872,6 +873,26 @@ ten times more freeable objects than there are.
=============================================================
watermark_boost_factor:
This factor controls the level of reclaim when memory is being fragmented.
It defines the percentage of the high watermark of a zone that will be
reclaimed if pages of different mobility are being mixed within pageblocks.
The intent is that compaction has less work to do in the future and to
increase the success rate of future high-order allocations such as SLUB
allocations, THP and hugetlbfs pages.
To make it sensible with respect to the watermark_scale_factor parameter,
the unit is in fractions of 10,000. The default value of 15,000 means
that up to 150% of the high watermark will be reclaimed in the event of
a pageblock being mixed due to fragmentation. The level of reclaim is
determined by the number of fragmentation events that occurred in the
recent past. If this value is smaller than a pageblock then a pageblocks
worth of pages will be reclaimed (e.g. 2MB on 64-bit x86). A boost factor
of 0 will disable the feature.
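
As a side note (not part of this series), a minimal userspace sketch of
poking this tunable through procfs; it assumes root privileges and a kernel
that still carries the watermark boost patches being reverted here:

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/vm/watermark_boost_factor", "r+");
	unsigned int factor;

	if (!f)
		return 1;	/* tunable absent, e.g. after this revert */
	if (fscanf(f, "%u", &factor) == 1)
		printf("boost factor %u => up to %u%% of the high watermark\n",
		       factor, factor / 100);
	rewind(f);
	fprintf(f, "15000\n");	/* the documented default */
	fclose(f);
	return 0;
}
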
=============================================================
watermark_scale_factor:
This factor controls the aggressiveness of kswapd. It defines the


@ -288,6 +288,7 @@ static int rmnet_changelink(struct net_device *dev, struct nlattr *tb[],
{
struct rmnet_priv *priv = netdev_priv(dev);
struct net_device *real_dev;
struct rmnet_endpoint *ep;
struct rmnet_port *port;
u16 mux_id;
@ -302,27 +303,19 @@ static int rmnet_changelink(struct net_device *dev, struct nlattr *tb[],
if (data[IFLA_RMNET_MUX_ID]) {
mux_id = nla_get_u16(data[IFLA_RMNET_MUX_ID]);
if (mux_id != priv->mux_id) {
struct rmnet_endpoint *ep;
ep = rmnet_get_endpoint(port, priv->mux_id);
if (!ep)
return -ENODEV;
if (rmnet_get_endpoint(port, mux_id)) {
NL_SET_ERR_MSG_MOD(extack,
"MUX ID already exists");
return -EINVAL;
}
hlist_del_init_rcu(&ep->hlnode);
hlist_add_head_rcu(&ep->hlnode,
&port->muxed_ep[mux_id]);
ep->mux_id = mux_id;
priv->mux_id = mux_id;
if (rmnet_get_endpoint(port, mux_id)) {
NL_SET_ERR_MSG_MOD(extack, "MUX ID already exists");
return -EINVAL;
}
ep = rmnet_get_endpoint(port, priv->mux_id);
if (!ep)
return -ENODEV;
hlist_del_init_rcu(&ep->hlnode);
hlist_add_head_rcu(&ep->hlnode, &port->muxed_ep[mux_id]);
ep->mux_id = mux_id;
priv->mux_id = mux_id;
}
if (data[IFLA_RMNET_FLAGS]) {


@ -469,7 +469,6 @@ static struct device_attribute power_supply_attrs[] = {
POWER_SUPPLY_ATTR(comp_clamp_level),
POWER_SUPPLY_ATTR(adapter_cc_mode),
POWER_SUPPLY_ATTR(skin_health),
POWER_SUPPLY_ATTR(charge_disable),
POWER_SUPPLY_ATTR(adapter_details),
POWER_SUPPLY_ATTR(dead_battery),
POWER_SUPPLY_ATTR(voltage_fifo),
@ -477,7 +476,6 @@ static struct device_attribute power_supply_attrs[] = {
POWER_SUPPLY_ATTR(operating_freq),
POWER_SUPPLY_ATTR(aicl_delay),
POWER_SUPPLY_ATTR(aicl_icl),
POWER_SUPPLY_ATTR(rtx),
POWER_SUPPLY_ATTR(cutoff_soc),
POWER_SUPPLY_ATTR(sys_soc),
POWER_SUPPLY_ATTR(batt_soc),
@ -506,8 +504,6 @@ static struct device_attribute power_supply_attrs[] = {
POWER_SUPPLY_ATTR(irq_status),
POWER_SUPPLY_ATTR(parallel_output_mode),
POWER_SUPPLY_ATTR(alignment),
POWER_SUPPLY_ATTR(moisture_detection_enabled),
POWER_SUPPLY_ATTR(fg_type),
/* Local extensions of type int64_t */
POWER_SUPPLY_ATTR(charge_counter_ext),
POWER_SUPPLY_ATTR(charge_charger_state),
@ -613,12 +609,6 @@ int power_supply_uevent(struct device *dev, struct kobj_uevent_env *env)
attr = &power_supply_attrs[psy->desc->properties[j]];
if (!attr->attr.name) {
dev_info(dev, "%s:%d FAKE attr.name=NULL skip\n",
__FILE__, __LINE__);
continue;
}
ret = power_supply_show_property(dev, attr, prop_buf);
if (ret == -ENODEV || ret == -ENODATA) {
/* When a battery is absent, we expect -ENODEV. Don't abort;


@ -54,16 +54,6 @@ config REGULATOR_USERSPACE_CONSUMER
If unsure, say no.
config REGULATOR_PROXY_CONSUMER
bool "Boot time regulator proxy consumer support"
help
This driver provides support for boot time regulator proxy requests.
It can enforce a specified voltage range, set a minimum current,
and/or keep a regulator enabled. It is needed in circumstances where
reducing one or more of these three quantities will cause hardware to
stop working if performed before the driver managing the hardware has
probed.
config REGULATOR_88PG86X
tristate "Marvell 88PG86X voltage regulators"
depends on I2C


@ -9,7 +9,6 @@ obj-$(CONFIG_OF) += of_regulator.o
obj-$(CONFIG_REGULATOR_FIXED_VOLTAGE) += fixed.o
obj-$(CONFIG_REGULATOR_VIRTUAL_CONSUMER) += virtual.o
obj-$(CONFIG_REGULATOR_USERSPACE_CONSUMER) += userspace-consumer.o
obj-$(CONFIG_REGULATOR_PROXY_CONSUMER) += proxy-consumer.o
obj-$(CONFIG_REGULATOR_88PG86X) += 88pg86x.o
obj-$(CONFIG_REGULATOR_88PM800) += 88pm800-regulator.o


@ -4490,30 +4490,6 @@ void regulator_unregister(struct regulator_dev *rdev)
}
EXPORT_SYMBOL_GPL(regulator_unregister);
static int regulator_sync_supply(struct device *dev, void *data)
{
struct regulator_dev *rdev = dev_to_rdev(dev);
if (rdev->dev.parent != data)
return 0;
if (!rdev->proxy_consumer)
return 0;
dev_dbg(data, "Removing regulator proxy consumer requests\n");
regulator_proxy_consumer_unregister(rdev->proxy_consumer);
rdev->proxy_consumer = NULL;
return 0;
}
void regulator_sync_state(struct device *dev)
{
class_for_each_device(&regulator_class, NULL, dev,
regulator_sync_supply);
}
EXPORT_SYMBOL_GPL(regulator_sync_state);
#ifdef CONFIG_SUSPEND
static int _regulator_suspend(struct device *dev, void *data)
{


@ -1,224 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2013-2014, 2016, The Linux Foundation. All rights reserved.
*/
#define pr_fmt(fmt) "%s: " fmt, __func__
#include <linux/bitops.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/of.h>
#include <linux/slab.h>
#include <linux/regulator/consumer.h>
#include <linux/regulator/proxy-consumer.h>
struct proxy_consumer {
struct list_head list;
struct regulator *reg;
bool enable;
int min_uV;
int max_uV;
u32 current_uA;
};
static DEFINE_MUTEX(proxy_consumer_list_mutex);
static LIST_HEAD(proxy_consumer_list);
static bool proxy_consumers_removed;
/**
* regulator_proxy_consumer_register() - conditionally register a proxy consumer
* for the specified regulator and set its boot time parameters
* @reg_dev: Device pointer of the regulator
* @reg_node: Device node pointer of the regulator
*
* Returns a struct proxy_consumer pointer corresponding to the regulator on
* success, ERR_PTR() if an error occurred, or NULL if no proxy consumer is
* needed for the regulator. This function calls
* regulator_get(reg_dev, "proxy") after first checking if any proxy consumer
* properties are present in the reg_node device node. After that, the voltage,
* minimum current, and/or the enable state will be set based upon the device
* node property values.
*/
struct proxy_consumer *regulator_proxy_consumer_register(struct device *reg_dev,
struct device_node *reg_node)
{
struct proxy_consumer *consumer = NULL;
const char *reg_name = "";
u32 voltage[2] = {0};
int rc;
bool no_sync_state = !reg_dev->driver->sync_state;
/* Return immediately if no proxy consumer properties are specified. */
if (!of_find_property(reg_node, "qcom,proxy-consumer-enable", NULL)
&& !of_find_property(reg_node, "qcom,proxy-consumer-voltage", NULL)
&& !of_find_property(reg_node, "qcom,proxy-consumer-current", NULL))
return NULL;
mutex_lock(&proxy_consumer_list_mutex);
/* Do not register new consumers if they cannot be removed later. */
if (proxy_consumers_removed && no_sync_state) {
rc = -EPERM;
goto unlock;
}
if (dev_name(reg_dev))
reg_name = dev_name(reg_dev);
consumer = kzalloc(sizeof(*consumer), GFP_KERNEL);
if (!consumer) {
rc = -ENOMEM;
goto unlock;
}
INIT_LIST_HEAD(&consumer->list);
consumer->enable
= of_property_read_bool(reg_node, "qcom,proxy-consumer-enable");
of_property_read_u32(reg_node, "qcom,proxy-consumer-current",
&consumer->current_uA);
rc = of_property_read_u32_array(reg_node, "qcom,proxy-consumer-voltage",
voltage, 2);
if (!rc) {
consumer->min_uV = voltage[0];
consumer->max_uV = voltage[1];
}
dev_dbg(reg_dev, "proxy consumer request: enable=%d, voltage_range=[%d, %d] uV, min_current=%d uA\n",
consumer->enable, consumer->min_uV, consumer->max_uV,
consumer->current_uA);
consumer->reg = regulator_get(reg_dev, "proxy");
if (IS_ERR_OR_NULL(consumer->reg)) {
rc = PTR_ERR(consumer->reg);
pr_err("regulator_get() failed for %s, rc=%d\n", reg_name, rc);
goto unlock;
}
if (consumer->max_uV > 0 && consumer->min_uV <= consumer->max_uV) {
rc = regulator_set_voltage(consumer->reg, consumer->min_uV,
consumer->max_uV);
if (rc) {
pr_err("regulator_set_voltage %s failed, rc=%d\n",
reg_name, rc);
goto free_regulator;
}
}
if (consumer->current_uA > 0) {
rc = regulator_set_load(consumer->reg,
consumer->current_uA);
if (rc < 0) {
pr_err("regulator_set_load %s failed, rc=%d\n",
reg_name, rc);
goto remove_voltage;
}
}
if (consumer->enable) {
rc = regulator_enable(consumer->reg);
if (rc) {
pr_err("regulator_enable %s failed, rc=%d\n", reg_name,
rc);
goto remove_current;
}
}
if (no_sync_state)
list_add(&consumer->list, &proxy_consumer_list);
mutex_unlock(&proxy_consumer_list_mutex);
return consumer;
remove_current:
regulator_set_load(consumer->reg, 0);
remove_voltage:
regulator_set_voltage(consumer->reg, 0, INT_MAX);
free_regulator:
regulator_put(consumer->reg);
unlock:
kfree(consumer);
mutex_unlock(&proxy_consumer_list_mutex);
return ERR_PTR(rc);
}
/* proxy_consumer_list_mutex must be held by caller. */
static int regulator_proxy_consumer_remove(struct proxy_consumer *consumer)
{
int rc = 0;
if (consumer->enable) {
rc = regulator_disable(consumer->reg);
if (rc)
pr_err("regulator_disable failed, rc=%d\n", rc);
}
if (consumer->current_uA > 0) {
rc = regulator_set_load(consumer->reg, 0);
if (rc < 0)
pr_err("regulator_set_load failed, rc=%d\n",
rc);
}
if (consumer->max_uV > 0 && consumer->min_uV <= consumer->max_uV) {
rc = regulator_set_voltage(consumer->reg, 0, INT_MAX);
if (rc)
pr_err("regulator_set_voltage failed, rc=%d\n", rc);
}
regulator_put(consumer->reg);
list_del(&consumer->list);
kfree(consumer);
return rc;
}
/**
* regulator_proxy_consumer_unregister() - unregister a proxy consumer and
* remove its boot time requests
* @consumer: Pointer to proxy_consumer to be removed
*
* Returns 0 on success or errno on failure. This function removes all requests
* made by the proxy consumer in regulator_proxy_consumer_register() and then
* frees the consumer's resources.
*/
int regulator_proxy_consumer_unregister(struct proxy_consumer *consumer)
{
int rc = 0;
if (IS_ERR_OR_NULL(consumer))
return 0;
mutex_lock(&proxy_consumer_list_mutex);
rc = regulator_proxy_consumer_remove(consumer);
mutex_unlock(&proxy_consumer_list_mutex);
return rc;
}
/*
* Remove all proxy requests at late_initcall_sync. The assumption is that all
* devices have probed at this point and made their own regulator requests.
*/
static int __init regulator_proxy_consumer_remove_all(void)
{
struct proxy_consumer *consumer;
struct proxy_consumer *temp;
mutex_lock(&proxy_consumer_list_mutex);
proxy_consumers_removed = true;
if (!list_empty(&proxy_consumer_list))
pr_info("removing legacy regulator proxy consumer requests\n");
list_for_each_entry_safe(consumer, temp, &proxy_consumer_list, list) {
regulator_proxy_consumer_remove(consumer);
}
mutex_unlock(&proxy_consumer_list_mutex);
return 0;
}
late_initcall_sync(regulator_proxy_consumer_remove_all);
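
For context, a rough sketch of how a regulator provider driver would pair
with the proxy-consumer code above: it opts into sync_state so the
boot-time proxy votes are dropped once every consumer has probed. All
foo_* names are invented for illustration and the header providing
regulator_sync_state() is an assumption; only regulator_sync_state()
itself comes from the patches being reverted.

#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/regulator/driver.h>	/* assumed home of regulator_sync_state() */

static int foo_regulator_probe(struct platform_device *pdev)
{
	/*
	 * Register regulators as usual; per the core.c hunk above, the core
	 * attaches a proxy consumer when qcom,proxy-consumer-* properties
	 * are present in the regulator's device tree node.
	 */
	return 0;
}

static struct platform_driver foo_regulator_driver = {
	.probe	= foo_regulator_probe,
	.driver	= {
		.name		= "foo-regulator",
		/* Drop the boot-time proxy votes once all consumers probe. */
		.sync_state	= regulator_sync_state,
	},
};
module_platform_driver(foo_regulator_driver);
MODULE_LICENSE("GPL v2");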


@ -1,6 +1,5 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_ION) += ion-alloc.o
CFLAGS_ion.o = -I$(src)
ion-alloc-objs += ion.o ion-ioctl.o ion_heap.o
ion-alloc-$(CONFIG_ION_SYSTEM_HEAP) += ion_system_heap.o ion_page_pool.o
ion-alloc-$(CONFIG_ION_CARVEOUT_HEAP) += ion_carveout_heap.o


@ -29,8 +29,6 @@
#include <linux/uaccess.h>
#include <linux/vmalloc.h>
#define CREATE_TRACE_POINTS
#include "ion_trace.h"
#include "ion.h"
static struct ion_device *internal_dev;
@ -63,20 +61,6 @@ static void ion_buffer_add(struct ion_device *dev,
rb_insert_color(&buffer->node, &dev->buffers);
}
static void track_buffer_created(struct ion_buffer *buffer)
{
long total = atomic_long_add_return(buffer->size, &total_heap_bytes);
trace_ion_stat(buffer->sg_table, buffer->size, total);
}
static void track_buffer_destroyed(struct ion_buffer *buffer)
{
long total = atomic_long_sub_return(buffer->size, &total_heap_bytes);
trace_ion_stat(buffer->sg_table, -buffer->size, total);
}
/* this function should only be called while dev->lock is held */
static struct ion_buffer *ion_buffer_create(struct ion_heap *heap,
struct ion_device *dev,
@ -118,7 +102,7 @@ static struct ion_buffer *ion_buffer_create(struct ion_heap *heap,
mutex_lock(&dev->buffer_lock);
ion_buffer_add(dev, buffer);
mutex_unlock(&dev->buffer_lock);
track_buffer_created(buffer);
atomic_long_add(len, &total_heap_bytes);
return buffer;
err1:
@ -147,7 +131,7 @@ static void _ion_buffer_destroy(struct ion_buffer *buffer)
mutex_lock(&dev->buffer_lock);
rb_erase(&buffer->node, &dev->buffers);
mutex_unlock(&dev->buffer_lock);
track_buffer_destroyed(buffer);
atomic_long_sub(buffer->size, &total_heap_bytes);
if (heap->flags & ION_HEAP_FLAG_DEFER_FREE)
ion_heap_freelist_add(heap, buffer);


@ -1,55 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* drivers/staging/android/ion/ion-trace.h
*
* Copyright (C) 2020 Google, Inc.
*/
#undef TRACE_SYSTEM
#define TRACE_SYSTEM ion
#if !defined(_ION_TRACE_H) || defined(TRACE_HEADER_MULTI_READ)
#define _ION_TRACE_H
#include <linux/tracepoint.h>
#ifndef __ION_PTR_TO_HASHVAL
static unsigned int __ion_ptr_to_hash(const void *ptr)
{
unsigned long hashval;
if (ptr_to_hashval(ptr, &hashval))
return 0;
/* The hashed value is only 32-bit */
return (unsigned int)hashval;
}
#define __ION_PTR_TO_HASHVAL
#endif
TRACE_EVENT(ion_stat,
TP_PROTO(const void *addr, long len,
unsigned long total_allocated),
TP_ARGS(addr, len, total_allocated),
TP_STRUCT__entry(__field(unsigned int, buffer_id)
__field(long, len)
__field(unsigned long, total_allocated)
),
TP_fast_assign(__entry->buffer_id = __ion_ptr_to_hash(addr);
__entry->len = len;
__entry->total_allocated = total_allocated;
),
TP_printk("buffer_id=%u len=%ldB total_allocated=%ldB",
__entry->buffer_id,
__entry->len,
__entry->total_allocated)
);
#endif /* _ION_TRACE_H */
/* This part must be outside protection */
#undef TRACE_INCLUDE_PATH
#define TRACE_INCLUDE_PATH .
#define TRACE_INCLUDE_FILE ion_trace
#include <trace/define_trace.h>


@ -26,12 +26,12 @@
#include <linux/thermal.h>
#include <linux/cpufreq.h>
#include <linux/err.h>
#include <linux/idr.h>
#include <linux/pm_opp.h>
#include <linux/slab.h>
#include <linux/cpu.h>
#include <linux/cpu_cooling.h>
#include <linux/energy_model.h>
#include <linux/of_device.h>
#include <trace/events/thermal.h>
@ -91,9 +91,9 @@ struct cpufreq_cooling_device {
struct cpufreq_policy *policy;
struct list_head node;
struct time_in_idle *idle_time;
struct cpu_cooling_ops *plat_ops;
};
static DEFINE_IDA(cpufreq_ida);
static DEFINE_MUTEX(cooling_list_lock);
static LIST_HEAD(cpufreq_cdev_list);
@ -342,16 +342,7 @@ static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev,
cpufreq_cdev->cpufreq_state = state;
cpufreq_cdev->clipped_freq = clip_freq;
/* Check if the device has a platform mitigation function that
* can handle the CPU freq mitigation, if not, notify cpufreq
* framework.
*/
if (cpufreq_cdev->plat_ops &&
cpufreq_cdev->plat_ops->ceil_limit)
cpufreq_cdev->plat_ops->ceil_limit(cpufreq_cdev->policy->cpu,
clip_freq);
else
cpufreq_update_policy(cpufreq_cdev->policy->cpu);
cpufreq_update_policy(cpufreq_cdev->policy->cpu);
return 0;
}
@ -533,9 +524,6 @@ static struct notifier_block thermal_cpufreq_notifier_block = {
* @policy: cpufreq policy
* Normally this should be same as cpufreq policy->related_cpus.
* @try_model: true if a power model should be used
* @plat_mitig_func: function that does the mitigation by changing the
* frequencies (Optional). By default, cpufreq framweork will
* be notified of the new limits.
*
* This interface function registers the cpufreq cooling device with the name
* "thermal-cpufreq-%x". This api can support multiple instances of cpufreq
@ -547,13 +535,13 @@ static struct notifier_block thermal_cpufreq_notifier_block = {
*/
static struct thermal_cooling_device *
__cpufreq_cooling_register(struct device_node *np,
struct cpufreq_policy *policy, bool try_model,
struct cpu_cooling_ops *plat_ops)
struct cpufreq_policy *policy, bool try_model)
{
struct thermal_cooling_device *cdev;
struct cpufreq_cooling_device *cpufreq_cdev;
char dev_name[THERMAL_NAME_LENGTH];
unsigned int i, num_cpus;
int ret;
struct thermal_cooling_device_ops *cooling_ops;
bool first;
@ -601,16 +589,20 @@ __cpufreq_cooling_register(struct device_node *np,
#endif
cooling_ops = &cpufreq_cooling_ops;
cpufreq_cdev->id = policy->cpu;
ret = ida_simple_get(&cpufreq_ida, 0, 0, GFP_KERNEL);
if (ret < 0) {
cdev = ERR_PTR(ret);
goto free_idle_time;
}
cpufreq_cdev->id = ret;
snprintf(dev_name, sizeof(dev_name), "thermal-cpufreq-%d",
cpufreq_cdev->id);
cpufreq_cdev->plat_ops = plat_ops;
cdev = thermal_of_cooling_device_register(np, dev_name, cpufreq_cdev,
cooling_ops);
if (IS_ERR(cdev))
goto free_idle_time;
goto remove_ida;
cpufreq_cdev->clipped_freq = get_state_freq(cpufreq_cdev, 0);
cpufreq_cdev->cdev = cdev;
@ -627,6 +619,8 @@ __cpufreq_cooling_register(struct device_node *np,
return cdev;
remove_ida:
ida_simple_remove(&cpufreq_ida, cpufreq_cdev->id);
free_idle_time:
kfree(cpufreq_cdev->idle_time);
free_cdev:
@ -648,7 +642,7 @@ free_cdev:
struct thermal_cooling_device *
cpufreq_cooling_register(struct cpufreq_policy *policy)
{
return __cpufreq_cooling_register(NULL, policy, false, NULL);
return __cpufreq_cooling_register(NULL, policy, false);
}
EXPORT_SYMBOL_GPL(cpufreq_cooling_register);
@ -684,7 +678,7 @@ of_cpufreq_cooling_register(struct cpufreq_policy *policy)
}
if (of_find_property(np, "#cooling-cells", NULL)) {
cdev = __cpufreq_cooling_register(np, policy, true, NULL);
cdev = __cpufreq_cooling_register(np, policy, true);
if (IS_ERR(cdev)) {
pr_err("cpu_cooling: cpu%d is not running as cooling device: %ld\n",
policy->cpu, PTR_ERR(cdev));
@ -697,37 +691,6 @@ of_cpufreq_cooling_register(struct cpufreq_policy *policy)
}
EXPORT_SYMBOL_GPL(of_cpufreq_cooling_register);
/**
* cpufreq_platform_cooling_register() - create cpufreq cooling device with
* additional platform specific mitigation function.
*
* @clip_cpus: cpumask of cpus where the frequency constraints will happen
* @plat_ops: the platform mitigation functions that will be called insted of
* cpufreq, if provided.
*
* Return: a valid struct thermal_cooling_device pointer on success,
* on failure, it returns a corresponding ERR_PTR().
*/
struct thermal_cooling_device *
cpufreq_platform_cooling_register(struct cpufreq_policy *policy,
struct cpu_cooling_ops *plat_ops)
{
struct device_node *cpu_node;
struct thermal_cooling_device *cdev = NULL;
cpu_node = of_cpu_device_node_get(policy->cpu);
if (!cpu_node) {
pr_err("No cpu node\n");
return ERR_PTR(-EINVAL);
}
cdev = __cpufreq_cooling_register(cpu_node, policy, false,
plat_ops);
of_node_put(cpu_node);
return cdev;
}
EXPORT_SYMBOL(cpufreq_platform_cooling_register);
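
For context, a hedged sketch of the platform hook this revert removes: a
platform supplies cpu_cooling_ops with a ceil_limit callback so mitigation
bypasses cpufreq_update_policy(). The foo_* names and the exact
cpu_cooling_ops field types are assumptions; only the (cpu, clip_freq)
call shape is visible in the hunk above.

#include <linux/cpu_cooling.h>
#include <linux/cpufreq.h>
#include <linux/types.h>

/* Prototype assumed from the call site above: ceil_limit(cpu, clip_freq). */
static int foo_ceil_limit(int cpu, u32 freq_khz)
{
	/* e.g. ask firmware or a mailbox to cap this CPU at freq_khz */
	return 0;
}

static struct cpu_cooling_ops foo_cooling_ops = {
	.ceil_limit = foo_ceil_limit,
};

static struct thermal_cooling_device *
foo_register_cooling(struct cpufreq_policy *policy)
{
	return cpufreq_platform_cooling_register(policy, &foo_cooling_ops);
}
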
/**
* cpufreq_cooling_unregister - function to remove cpufreq cooling device.
* @cdev: thermal cooling device pointer.
@ -755,6 +718,7 @@ void cpufreq_cooling_unregister(struct thermal_cooling_device *cdev)
CPUFREQ_POLICY_NOTIFIER);
thermal_cooling_device_unregister(cpufreq_cdev->cdev);
ida_simple_remove(&cpufreq_ida, cpufreq_cdev->id);
kfree(cpufreq_cdev->idle_time);
kfree(cpufreq_cdev);
}


@ -64,7 +64,6 @@ struct __sensor_param {
* @slope: slope of the temperature adjustment curve
* @offset: offset of the temperature adjustment curve
* @default_disable: Keep the thermal zone disabled by default
* @is_wakeable: Ignore post suspend thermal zone re-evaluation
* @tzd: thermal zone device pointer for this sensor
* @ntrips: number of trip points
* @trips: an array of trip points (0..ntrips - 1)
@ -82,7 +81,6 @@ struct __thermal_zone {
int offset;
struct thermal_zone_device *tzd;
bool default_disable;
bool is_wakeable;
/* trip data */
int ntrips;
@ -407,63 +405,11 @@ static int of_thermal_get_trip_temp(struct thermal_zone_device *tz, int trip,
if (trip >= data->ntrips || trip < 0)
return -EDOM;
if (data->senps && data->senps->ops &&
data->senps->ops->get_trip_temp) {
int ret;
ret = data->senps->ops->get_trip_temp(data->senps->sensor_data,
trip, temp);
if (ret)
return ret;
} else {
*temp = data->trips[trip].temperature;
}
*temp = data->trips[trip].temperature;
return 0;
}
static bool of_thermal_is_trips_triggered(struct thermal_zone_device *tz,
int temp)
{
int tt, th, trip, last_temp;
struct __thermal_zone *data = tz->devdata;
bool triggered = false;
if (!tz->tzp)
return triggered;
mutex_lock(&tz->lock);
last_temp = tz->temperature;
for (trip = 0; trip < data->ntrips; trip++) {
if (!tz->tzp->tracks_low) {
tt = data->trips[trip].temperature;
if (temp >= tt && last_temp < tt) {
triggered = true;
break;
}
th = tt - data->trips[trip].hysteresis;
if (temp <= th && last_temp > th) {
triggered = true;
break;
}
} else {
tt = data->trips[trip].temperature;
if (temp <= tt && last_temp > tt) {
triggered = true;
break;
}
th = tt + data->trips[trip].hysteresis;
if (temp >= th && last_temp < th) {
triggered = true;
break;
}
}
}
mutex_unlock(&tz->lock);
return triggered;
}
static int of_thermal_set_trip_temp(struct thermal_zone_device *tz, int trip,
int temp)
{
@ -529,70 +475,6 @@ static int of_thermal_get_crit_temp(struct thermal_zone_device *tz,
return -EINVAL;
}
static bool of_thermal_is_wakeable(struct thermal_zone_device *tz)
{
struct __thermal_zone *data = tz->devdata;
return data->is_wakeable;
}
static void handle_thermal_trip(struct thermal_zone_device *tz,
bool temp_valid, int trip_temp)
{
struct thermal_zone_device *zone;
struct __thermal_zone *data;
struct list_head *head;
if (!tz || !tz->devdata)
return;
data = tz->devdata;
if (!data->senps)
return;
head = &data->senps->first_tz;
list_for_each_entry(data, head, list) {
zone = data->tzd;
if (data->mode == THERMAL_DEVICE_DISABLED)
continue;
if (!temp_valid) {
thermal_zone_device_update(zone,
THERMAL_EVENT_UNSPECIFIED);
} else {
if (!of_thermal_is_trips_triggered(zone, trip_temp))
continue;
thermal_zone_device_update_temp(zone,
THERMAL_EVENT_UNSPECIFIED, trip_temp);
}
}
}
/*
* of_thermal_handle_trip_temp - Handle thermal trip from sensors
*
* @tz: pointer to the primary thermal zone.
* @trip_temp: The temperature
*/
void of_thermal_handle_trip_temp(struct thermal_zone_device *tz,
int trip_temp)
{
return handle_thermal_trip(tz, true, trip_temp);
}
EXPORT_SYMBOL_GPL(of_thermal_handle_trip_temp);
/*
* of_thermal_handle_trip - Handle thermal trip from sensors
*
* @tz: pointer to the primary thermal zone.
*/
void of_thermal_handle_trip(struct thermal_zone_device *tz)
{
return handle_thermal_trip(tz, false, 0);
}
EXPORT_SYMBOL_GPL(of_thermal_handle_trip);
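
For context, a hedged sketch of how a sensor driver would have used the two
helpers above: a threshold interrupt either reports the crossed temperature
or asks the framework to re-poll the sensor. The foo_tsens names are
invented for illustration; only the of_thermal_handle_trip*() calls and
their thermal_core.h declarations come from this tree.

#include <linux/interrupt.h>
#include <linux/thermal.h>
#include "thermal_core.h"	/* of_thermal_handle_trip*() declarations */

struct foo_tsens {
	struct thermal_zone_device *tzd;	/* from thermal_zone_of_sensor_register() */
	int (*read_temp)(struct foo_tsens *s, int *temp_mdegc);
};

static irqreturn_t foo_tsens_threshold_irq(int irq, void *data)
{
	struct foo_tsens *s = data;
	int temp;

	if (s->read_temp(s, &temp))
		/* No valid sample: have the framework re-read the sensor. */
		of_thermal_handle_trip(s->tzd);
	else
		/* Report the temperature that crossed the programmed trip. */
		of_thermal_handle_trip_temp(s->tzd, temp);

	return IRQ_HANDLED;
}
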
static struct thermal_zone_device_ops of_thermal_ops = {
.get_mode = of_thermal_get_mode,
.set_mode = of_thermal_set_mode,
@ -606,8 +488,6 @@ static struct thermal_zone_device_ops of_thermal_ops = {
.bind = of_thermal_bind,
.unbind = of_thermal_unbind,
.is_wakeable = of_thermal_is_wakeable,
};
static struct thermal_zone_of_device_ops of_virt_ops = {
@ -649,7 +529,6 @@ thermal_zone_of_add_sensor(struct device_node *zone,
if (sens_param->ops->set_emul_temp)
tzd->ops->set_emul_temp = of_thermal_set_emul_temp;
list_add_tail(&tz->list, &sens_param->first_tz);
mutex_unlock(&tzd->lock);
return tzd;
@ -684,9 +563,7 @@ thermal_zone_of_add_sensor(struct device_node *zone,
* that refer to it.
*
* Return: On success returns a valid struct thermal_zone_device,
* otherwise, it returns a corresponding ERR_PTR(). Incase there are multiple
* thermal zones referencing the same sensor, the return value will be
* thermal_zone_device pointer of the first thermal zone. Caller must
* otherwise, it returns a corresponding ERR_PTR(). Caller must
* check the return value with help of IS_ERR() helper.
*/
struct thermal_zone_device *
@ -695,7 +572,6 @@ thermal_zone_of_sensor_register(struct device *dev, int sensor_id, void *data,
{
struct device_node *np, *child, *sensor_np;
struct thermal_zone_device *tzd = ERR_PTR(-ENODEV);
struct thermal_zone_device *first_tzd = NULL;
struct __sensor_param *sens_param = NULL;
np = of_find_node_by_name(NULL, "thermal-zones");
@ -714,16 +590,11 @@ thermal_zone_of_sensor_register(struct device *dev, int sensor_id, void *data,
}
sens_param->sensor_data = data;
sens_param->ops = ops;
INIT_LIST_HEAD(&sens_param->first_tz);
sens_param->trip_high = INT_MAX;
sens_param->trip_low = INT_MIN;
mutex_init(&sens_param->lock);
sensor_np = of_node_get(dev->of_node);
for_each_available_child_of_node(np, child) {
struct of_phandle_args sensor_specs;
int ret, id;
struct __thermal_zone *tz;
/* For now, thermal framework supports only 1 sensor per zone */
ret = of_parse_phandle_with_args(child, "thermal-sensors",
@ -744,25 +615,22 @@ thermal_zone_of_sensor_register(struct device *dev, int sensor_id, void *data,
if (sensor_specs.np == sensor_np && id == sensor_id) {
tzd = thermal_zone_of_add_sensor(child, sensor_np,
sens_param);
if (!IS_ERR(tzd)) {
if (!first_tzd)
first_tzd = tzd;
tz = tzd->devdata;
if (!tz->default_disable)
tzd->ops->set_mode(tzd,
THERMAL_DEVICE_ENABLED);
}
if (!IS_ERR(tzd))
tzd->ops->set_mode(tzd, THERMAL_DEVICE_ENABLED);
of_node_put(sensor_specs.np);
of_node_put(child);
goto exit;
}
of_node_put(sensor_specs.np);
}
exit:
of_node_put(sensor_np);
of_node_put(np);
if (!first_tzd) {
first_tzd = ERR_PTR(-ENODEV);
if (tzd == ERR_PTR(-ENODEV))
kfree(sens_param);
}
return first_tzd;
return tzd;
}
EXPORT_SYMBOL_GPL(thermal_zone_of_sensor_register);
@ -784,9 +652,7 @@ EXPORT_SYMBOL_GPL(thermal_zone_of_sensor_register);
void thermal_zone_of_sensor_unregister(struct device *dev,
struct thermal_zone_device *tzd)
{
struct __thermal_zone *tz, *next;
struct thermal_zone_device *pos;
struct list_head *head;
struct __thermal_zone *tz;
if (!dev || !tzd || !tzd->devdata)
return;
@ -797,20 +663,14 @@ void thermal_zone_of_sensor_unregister(struct device *dev,
if (!tz)
return;
head = &tz->senps->first_tz;
list_for_each_entry_safe(tz, next, head, list) {
pos = tz->tzd;
mutex_lock(&pos->lock);
pos->ops->get_temp = NULL;
pos->ops->get_trend = NULL;
pos->ops->set_emul_temp = NULL;
mutex_lock(&tzd->lock);
tzd->ops->get_temp = NULL;
tzd->ops->get_trend = NULL;
tzd->ops->set_emul_temp = NULL;
list_del(&tz->list);
if (list_empty(&tz->senps->first_tz))
kfree(tz->senps);
tz->senps = NULL;
mutex_unlock(&pos->lock);
}
kfree(tz->senps);
tz->senps = NULL;
mutex_unlock(&tzd->lock);
}
EXPORT_SYMBOL_GPL(thermal_zone_of_sensor_unregister);
@ -1211,7 +1071,6 @@ __init *thermal_of_build_thermal_zone(struct device_node *np)
if (!tz)
return ERR_PTR(-ENOMEM);
INIT_LIST_HEAD(&tz->list);
ret = of_property_read_u32(np, "polling-delay-passive", &prop);
if (ret < 0) {
pr_err("missing polling-delay-passive property\n");
@ -1226,10 +1085,6 @@ __init *thermal_of_build_thermal_zone(struct device_node *np)
}
tz->polling_delay = prop;
tz->is_wakeable = of_property_read_bool(np,
"wake-capable-sensor");
tz->default_disable = of_property_read_bool(np,
"disable-thermal-zone");
/*
* REVIST: for now, the thermal framework supports only
* one sensor per thermal zone. Thus, we are considering
@ -1406,9 +1261,7 @@ int __init of_parse_thermal_zones(void)
kfree(ops);
of_thermal_free_zone(tz);
/* attempting to build remaining zones still */
continue;
}
tz->tzd = zone;
}
of_node_put(np);


@ -424,23 +424,6 @@ static void handle_thermal_trip(struct thermal_zone_device *tz, int trip)
monitor_thermal_zone(tz);
}
static void store_temperature(struct thermal_zone_device *tz, int temp)
{
mutex_lock(&tz->lock);
tz->last_temperature = tz->temperature;
tz->temperature = temp;
mutex_unlock(&tz->lock);
trace_thermal_temperature(tz);
if (tz->last_temperature == THERMAL_TEMP_INVALID ||
tz->last_temperature == THERMAL_TEMP_INVALID_LOW)
dev_dbg(&tz->device, "last_temperature N/A, current_temperature=%d\n",
tz->temperature);
else
dev_dbg(&tz->device, "last_temperature=%d, current_temperature=%d\n",
tz->last_temperature, tz->temperature);
}
static void update_temperature(struct thermal_zone_device *tz)
{
int temp, ret;
@ -453,7 +436,19 @@ static void update_temperature(struct thermal_zone_device *tz)
ret);
return;
}
store_temperature(tz, temp);
mutex_lock(&tz->lock);
tz->last_temperature = tz->temperature;
tz->temperature = temp;
mutex_unlock(&tz->lock);
trace_thermal_temperature(tz);
if (tz->last_temperature == THERMAL_TEMP_INVALID)
dev_dbg(&tz->device, "last_temperature N/A, current_temperature=%d\n",
tz->temperature);
else
dev_dbg(&tz->device, "last_temperature=%d, current_temperature=%d\n",
tz->last_temperature, tz->temperature);
}
static void thermal_zone_device_init(struct thermal_zone_device *tz)
@ -470,38 +465,12 @@ static void thermal_zone_device_reset(struct thermal_zone_device *tz)
thermal_zone_device_init(tz);
}
void thermal_zone_device_update_temp(struct thermal_zone_device *tz,
enum thermal_notify_event event, int temp)
{
int count;
if (!tz || !tz->ops)
return;
if (atomic_read(&in_suspend) && (!tz->ops->is_wakeable ||
!(tz->ops->is_wakeable(tz))))
return;
store_temperature(tz, temp);
thermal_zone_set_trips(tz);
tz->notify_event = event;
for (count = 0; count < tz->trips; count++)
handle_thermal_trip(tz, count);
}
void thermal_zone_device_update(struct thermal_zone_device *tz,
enum thermal_notify_event event)
{
int count;
if (!tz || !tz->ops)
return;
if (atomic_read(&in_suspend) && (!tz->ops->is_wakeable ||
!(tz->ops->is_wakeable(tz))))
if (atomic_read(&in_suspend))
return;
if (!tz->ops->get_temp)
@ -1575,9 +1544,6 @@ static int thermal_pm_notify(struct notifier_block *nb,
case PM_POST_SUSPEND:
atomic_set(&in_suspend, 0);
list_for_each_entry(tz, &thermal_tz_list, node) {
if (tz->ops && tz->ops->is_wakeable &&
tz->ops->is_wakeable(tz))
continue;
thermal_zone_device_init(tz);
thermal_zone_device_update(tz,
THERMAL_EVENT_UNSPECIFIED);


@ -124,9 +124,6 @@ int of_thermal_get_ntrips(struct thermal_zone_device *);
bool of_thermal_is_trip_valid(struct thermal_zone_device *, int);
const struct thermal_trip *
of_thermal_get_trip_points(struct thermal_zone_device *);
void of_thermal_handle_trip(struct thermal_zone_device *tz);
void of_thermal_handle_trip_temp(struct thermal_zone_device *tz,
int trip_temp);
#else
static inline int of_parse_thermal_zones(void) { return 0; }
static inline void of_thermal_destroy_zones(void) { }
@ -144,13 +141,6 @@ of_thermal_get_trip_points(struct thermal_zone_device *tz)
{
return NULL;
}
static inline
void of_thermal_handle_trip(struct thermal_zone_device *tz)
{ }
static inline
void of_thermal_handle_trip_temp(struct thermal_zone_device *tz,
int trip_temp)
{ }
#endif
#endif /* __THERMAL_CORE_H__ */


@ -1086,15 +1086,6 @@ int usb_get_bos_descriptor(struct usb_device *dev)
case USB_PTM_CAP_TYPE:
dev->bos->ptm_cap =
(struct usb_ptm_cap_descriptor *)buffer;
break;
case USB_CAP_TYPE_CONFIG_SUMMARY:
/* one such desc per function */
if (!dev->bos->num_config_summary_desc)
dev->bos->config_summary =
(struct usb_config_summary_descriptor *)buffer;
dev->bos->num_config_summary_desc++;
break;
default:
break;
}


@ -1458,9 +1458,6 @@ int usb_suspend(struct device *dev, pm_message_t msg)
struct usb_device *udev = to_usb_device(dev);
int r;
if (udev->bus->skip_resume && udev->state == USB_STATE_SUSPENDED)
return 0;
unbind_no_pm_drivers_interfaces(udev);
/* From now on we are sure all drivers support suspend/resume
@ -1497,15 +1494,6 @@ int usb_resume(struct device *dev, pm_message_t msg)
struct usb_device *udev = to_usb_device(dev);
int status;
/*
* Some buses would like to keep their devices in suspend
* state after system resume. Their resume happen when
* a remote wakeup is detected or interface driver start
* I/O.
*/
if (udev->bus->skip_resume)
return 0;
/* For all calls, take the device back to full power and
* tell the PM core in case it was autosuspended previously.
* Unbind the interfaces that will need rebinding later,


@ -42,36 +42,6 @@ static int is_activesync(struct usb_interface_descriptor *desc)
&& desc->bInterfaceProtocol == 1;
}
static int get_usb_audio_config(struct usb_host_bos *bos)
{
unsigned int desc_cnt, num_cfg_desc, len = 0;
unsigned char *buffer;
struct usb_config_summary_descriptor *conf_summary;
if (!bos || !bos->config_summary)
goto done;
num_cfg_desc = bos->num_config_summary_desc;
conf_summary = bos->config_summary;
buffer = (unsigned char *)conf_summary;
for (desc_cnt = 0; desc_cnt < num_cfg_desc; desc_cnt++) {
conf_summary =
(struct usb_config_summary_descriptor *)(buffer + len);
len += conf_summary->bLength;
if (conf_summary->bcdVersion != USB_CONFIG_SUMMARY_DESC_REV ||
conf_summary->bClass != USB_CLASS_AUDIO)
continue;
/* return 1st config as per device preference */
return conf_summary->bConfigurationIndex[0];
}
done:
return -EINVAL;
}
int usb_choose_configuration(struct usb_device *udev)
{
int i;
@ -175,10 +145,7 @@ int usb_choose_configuration(struct usb_device *udev)
insufficient_power, plural(insufficient_power));
if (best) {
/* choose device preferred config */
i = get_usb_audio_config(udev->bos);
if (i < 0)
i = best->desc.bConfigurationValue;
i = best->desc.bConfigurationValue;
dev_dbg(&udev->dev,
"configuration #%d chosen from %d choice%s\n",
i, num_configs, plural(num_configs));


@ -2233,54 +2233,8 @@ int usb_hcd_get_frame_number (struct usb_device *udev)
return hcd->driver->get_frame_number (hcd);
}
int usb_hcd_sec_event_ring_setup(struct usb_device *udev,
unsigned int intr_num)
{
struct usb_hcd *hcd = bus_to_hcd(udev->bus);
if (!HCD_RH_RUNNING(hcd))
return 0;
return hcd->driver->sec_event_ring_setup(hcd, intr_num);
}
int usb_hcd_sec_event_ring_cleanup(struct usb_device *udev,
unsigned int intr_num)
{
struct usb_hcd *hcd = bus_to_hcd(udev->bus);
if (!HCD_RH_RUNNING(hcd))
return 0;
return hcd->driver->sec_event_ring_cleanup(hcd, intr_num);
}
/*-------------------------------------------------------------------------*/
phys_addr_t
usb_hcd_get_sec_event_ring_phys_addr(struct usb_device *udev,
unsigned int intr_num, dma_addr_t *dma)
{
struct usb_hcd *hcd = bus_to_hcd(udev->bus);
if (!HCD_RH_RUNNING(hcd))
return 0;
return hcd->driver->get_sec_event_ring_phys_addr(hcd, intr_num, dma);
}
phys_addr_t
usb_hcd_get_xfer_ring_phys_addr(struct usb_device *udev,
struct usb_host_endpoint *ep, dma_addr_t *dma)
{
struct usb_hcd *hcd = bus_to_hcd(udev->bus);
if (!HCD_RH_RUNNING(hcd))
return 0;
return hcd->driver->get_xfer_ring_phys_addr(hcd, udev, ep, dma);
}
int usb_hcd_get_controller_id(struct usb_device *udev)
{
struct usb_hcd *hcd = bus_to_hcd(udev->bus);
@ -2291,14 +2245,6 @@ int usb_hcd_get_controller_id(struct usb_device *udev)
return hcd->driver->get_core_id(hcd);
}
int usb_hcd_stop_endpoint(struct usb_device *udev,
struct usb_host_endpoint *ep)
{
struct usb_hcd *hcd = bus_to_hcd(udev->bus);
return hcd->driver->stop_endpoint(hcd, udev, ep);
}
#ifdef CONFIG_PM
int hcd_bus_suspend(struct usb_device *rhdev, pm_message_t msg)
@ -2567,7 +2513,6 @@ void usb_hc_died (struct usb_hcd *hcd)
}
spin_unlock_irqrestore (&hcd_root_hub_lock, flags);
/* Make sure that the other roothub is also deallocated. */
usb_atomic_notify_dead_bus(&hcd->self);
}
EXPORT_SYMBOL_GPL (usb_hc_died);


@ -19,7 +19,6 @@
#include "usb.h"
static BLOCKING_NOTIFIER_HEAD(usb_notifier_list);
static ATOMIC_NOTIFIER_HEAD(usb_atomic_notifier_list);
/**
* usb_register_notify - register a notifier callback whenever a usb change happens
@ -70,33 +69,3 @@ void usb_notify_remove_bus(struct usb_bus *ubus)
{
blocking_notifier_call_chain(&usb_notifier_list, USB_BUS_REMOVE, ubus);
}
/**
* usb_register_atomic_notify - register a atomic notifier callback whenever a
* HC dies
* @nb: pointer to the atomic notifier block for the callback events.
*
*/
void usb_register_atomic_notify(struct notifier_block *nb)
{
atomic_notifier_chain_register(&usb_atomic_notifier_list, nb);
}
EXPORT_SYMBOL_GPL(usb_register_atomic_notify);
/**
* usb_unregister_atomic_notify - unregister a atomic notifier callback
* @nb: pointer to the notifier block for the callback events.
*
*/
void usb_unregister_atomic_notify(struct notifier_block *nb)
{
atomic_notifier_chain_unregister(&usb_atomic_notifier_list, nb);
}
EXPORT_SYMBOL_GPL(usb_unregister_atomic_notify);
void usb_atomic_notify_dead_bus(struct usb_bus *ubus)
{
atomic_notifier_call_chain(&usb_atomic_notifier_list, USB_BUS_DIED,
ubus);
}
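
For context, a hedged sketch of a client of the atomic notifier removed
above, reacting to the USB_BUS_DIED action raised by
usb_atomic_notify_dead_bus(). The foo_* names are invented for
illustration.

#include <linux/notifier.h>
#include <linux/usb.h>

static int foo_hc_died_notify(struct notifier_block *nb, unsigned long action,
			      void *data)
{
	struct usb_bus *ubus = data;

	if (action == USB_BUS_DIED)
		pr_err("host controller for USB bus %d died\n", ubus->busnum);

	return NOTIFY_OK;
}

static struct notifier_block foo_hc_died_nb = {
	.notifier_call = foo_hc_died_notify,
};

static int __init foo_init(void)
{
	usb_register_atomic_notify(&foo_hc_died_nb);
	return 0;
}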


@ -825,44 +825,6 @@ int usb_get_current_frame_number(struct usb_device *dev)
}
EXPORT_SYMBOL_GPL(usb_get_current_frame_number);
int usb_sec_event_ring_setup(struct usb_device *dev,
unsigned int intr_num)
{
if (dev->state == USB_STATE_NOTATTACHED)
return 0;
return usb_hcd_sec_event_ring_setup(dev, intr_num);
}
EXPORT_SYMBOL(usb_sec_event_ring_setup);
int usb_sec_event_ring_cleanup(struct usb_device *dev,
unsigned int intr_num)
{
return usb_hcd_sec_event_ring_cleanup(dev, intr_num);
}
EXPORT_SYMBOL(usb_sec_event_ring_cleanup);
phys_addr_t
usb_get_sec_event_ring_phys_addr(struct usb_device *dev,
unsigned int intr_num, dma_addr_t *dma)
{
if (dev->state == USB_STATE_NOTATTACHED)
return 0;
return usb_hcd_get_sec_event_ring_phys_addr(dev, intr_num, dma);
}
EXPORT_SYMBOL_GPL(usb_get_sec_event_ring_phys_addr);
phys_addr_t usb_get_xfer_ring_phys_addr(struct usb_device *dev,
struct usb_host_endpoint *ep, dma_addr_t *dma)
{
if (dev->state == USB_STATE_NOTATTACHED)
return 0;
return usb_hcd_get_xfer_ring_phys_addr(dev, ep, dma);
}
EXPORT_SYMBOL_GPL(usb_get_xfer_ring_phys_addr);
/**
* usb_get_controller_id - returns the host controller id.
* @dev: the device whose host controller id is being queried.
@ -876,12 +838,6 @@ int usb_get_controller_id(struct usb_device *dev)
}
EXPORT_SYMBOL_GPL(usb_get_controller_id);
int usb_stop_endpoint(struct usb_device *dev, struct usb_host_endpoint *ep)
{
return usb_hcd_stop_endpoint(dev, ep);
}
EXPORT_SYMBOL_GPL(usb_stop_endpoint);
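
For context, a hedged sketch of how an offload client (for example an audio
DSP bridge) would have used the helpers above to set up a secondary event
ring and fetch ring physical addresses. The foo_* names and the choice of
interrupter 1 are assumptions; the helper signatures match the code above.

#include <linux/errno.h>
#include <linux/usb.h>

static int foo_offload_start(struct usb_device *udev,
			     struct usb_host_endpoint *ep)
{
	dma_addr_t er_dma, xfer_dma;
	phys_addr_t er_pa, xfer_pa;
	int ret;

	ret = usb_sec_event_ring_setup(udev, 1);	/* interrupter 1: example */
	if (ret)
		return ret;

	er_pa = usb_get_sec_event_ring_phys_addr(udev, 1, &er_dma);
	xfer_pa = usb_get_xfer_ring_phys_addr(udev, ep, &xfer_dma);
	if (!er_pa || !xfer_pa)
		return -ENODEV;

	/* ...hand er_pa/xfer_pa to the peripheral here... */
	return 0;
}

static void foo_offload_stop(struct usb_device *udev,
			     struct usb_host_endpoint *ep)
{
	usb_stop_endpoint(udev, ep);
	usb_sec_event_ring_cleanup(udev, 1);
}
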
/*-------------------------------------------------------------------*/
/*
* __usb_get_extra_descriptor() finds a descriptor of specific type in the


@ -192,7 +192,6 @@ extern void usb_notify_add_device(struct usb_device *udev);
extern void usb_notify_remove_device(struct usb_device *udev);
extern void usb_notify_add_bus(struct usb_bus *ubus);
extern void usb_notify_remove_bus(struct usb_bus *ubus);
extern void usb_atomic_notify_dead_bus(struct usb_bus *ubus);
extern void usb_hub_adjust_deviceremovable(struct usb_device *hdev,
struct usb_hub_descriptor *desc);


@ -981,9 +981,6 @@ static int dwc3_core_init(struct dwc3 *dwc)
if (dwc->dis_tx_ipgap_linecheck_quirk)
reg |= DWC3_GUCTL1_TX_IPGAP_LINECHECK_DIS;
if (dwc->parkmode_disable_ss_quirk)
reg |= DWC3_GUCTL1_PARKMODE_DISABLE_SS;
dwc3_writel(dwc->regs, DWC3_GUCTL1, reg);
}
@ -1290,8 +1287,6 @@ static void dwc3_get_properties(struct dwc3 *dwc)
"snps,dis-del-phy-power-chg-quirk");
dwc->dis_tx_ipgap_linecheck_quirk = device_property_read_bool(dev,
"snps,dis-tx-ipgap-linecheck-quirk");
dwc->parkmode_disable_ss_quirk = device_property_read_bool(dev,
"snps,parkmode-disable-ss-quirk");
dwc->tx_de_emphasis_quirk = device_property_read_bool(dev,
"snps,tx_de_emphasis_quirk");


@ -242,7 +242,6 @@
#define DWC3_GUCTL_HSTINAUTORETRY BIT(14)
/* Global User Control 1 Register */
#define DWC3_GUCTL1_PARKMODE_DISABLE_SS BIT(17)
#define DWC3_GUCTL1_TX_IPGAP_LINECHECK_DIS BIT(28)
#define DWC3_GUCTL1_DEV_L1_EXIT_BY_HW BIT(24)
@ -300,10 +299,6 @@
#define DWC3_GTXFIFOSIZ_TXFDEF(n) ((n) & 0xffff)
#define DWC3_GTXFIFOSIZ_TXFSTADDR(n) ((n) & 0xffff0000)
/* Global RX Fifo Size Register */
#define DWC31_GRXFIFOSIZ_RXFDEP(n) ((n) & 0x7fff) /* DWC_usb31 only */
#define DWC3_GRXFIFOSIZ_RXFDEP(n) ((n) & 0xffff)
/* Global Event Size Registers */
#define DWC3_GEVNTSIZ_INTMASK BIT(31)
#define DWC3_GEVNTSIZ_SIZE(n) ((n) & 0xffff)
@ -997,8 +992,6 @@ struct dwc3_scratchpad_array {
* change quirk.
* @dis_tx_ipgap_linecheck_quirk: set if we disable u2mac linestate
* check during HS transmit.
* @parkmode_disable_ss_quirk: set if we need to disable all SuperSpeed
* instances in park mode.
* @tx_de_emphasis_quirk: set if we enable Tx de-emphasis quirk
* @tx_de_emphasis: Tx de-emphasis value
* 0 - -6dB de-emphasis
@ -1170,7 +1163,6 @@ struct dwc3 {
unsigned dis_u2_freeclk_exists_quirk:1;
unsigned dis_del_phy_power_chg_quirk:1;
unsigned dis_tx_ipgap_linecheck_quirk:1;
unsigned parkmode_disable_ss_quirk:1;
unsigned tx_de_emphasis_quirk:1;
unsigned tx_de_emphasis:2;


@ -688,13 +688,12 @@ out:
return 0;
}
static void dwc3_stop_active_transfer(struct dwc3_ep *dep, bool force,
bool interrupt);
static void dwc3_stop_active_transfer(struct dwc3_ep *dep, bool force);
static void dwc3_remove_requests(struct dwc3 *dwc, struct dwc3_ep *dep)
{
struct dwc3_request *req;
dwc3_stop_active_transfer(dep, true, false);
dwc3_stop_active_transfer(dep, true);
/* - giveback all requests to gadget driver */
while (!list_empty(&dep->started_list)) {
@ -1370,7 +1369,7 @@ static void dwc3_gadget_ep_skip_trbs(struct dwc3_ep *dep, struct dwc3_request *r
for (i = 0; i < req->num_trbs; i++) {
struct dwc3_trb *trb;
trb = &dep->trb_pool[dep->trb_dequeue];
trb = req->trb + i;
trb->ctrl &= ~DWC3_TRB_CTRL_HWO;
dwc3_ep_inc_deq(dep);
}
@ -1417,7 +1416,7 @@ static int dwc3_gadget_ep_dequeue(struct usb_ep *ep,
}
if (r == req) {
/* wait until it is processed */
dwc3_stop_active_transfer(dep, true, true);
dwc3_stop_active_transfer(dep, true);
if (!r->trb)
goto out0;
@ -1577,6 +1576,7 @@ static int __dwc3_gadget_wakeup(struct dwc3 *dwc)
u32 reg;
u8 link_state;
u8 speed;
/*
* According to the Databook Remote wakeup request should
@ -1586,13 +1586,16 @@ static int __dwc3_gadget_wakeup(struct dwc3 *dwc)
*/
reg = dwc3_readl(dwc->regs, DWC3_DSTS);
speed = reg & DWC3_DSTS_CONNECTSPD;
if ((speed == DWC3_DSTS_SUPERSPEED) ||
(speed == DWC3_DSTS_SUPERSPEED_PLUS))
return 0;
link_state = DWC3_DSTS_USBLNKST(reg);
switch (link_state) {
case DWC3_LINK_STATE_RESET:
case DWC3_LINK_STATE_RX_DET: /* in HS, means Early Suspend */
case DWC3_LINK_STATE_U3: /* in HS, means SUSPEND */
case DWC3_LINK_STATE_RESUME:
break;
default:
return -EINVAL;
@ -2032,6 +2035,7 @@ static int dwc3_gadget_init_in_endpoint(struct dwc3_ep *dep)
{
struct dwc3 *dwc = dep->dwc;
int mdwidth;
int kbytes;
int size;
mdwidth = DWC3_MDWIDTH(dwc->hwparams.hwparams0);
@ -2047,17 +2051,17 @@ static int dwc3_gadget_init_in_endpoint(struct dwc3_ep *dep)
/* FIFO Depth is in MDWDITH bytes. Multiply */
size *= mdwidth;
kbytes = size / 1024;
if (kbytes == 0)
kbytes = 1;
/*
* To meet performance requirement, a minimum TxFIFO size of 3x
* MaxPacketSize is recommended for endpoints that support burst and a
* minimum TxFIFO size of 2x MaxPacketSize for endpoints that don't
* support burst. Use those numbers and we can calculate the max packet
* limit as below.
* FIFO sizes account an extra MDWIDTH * (kbytes + 1) bytes for
* internal overhead. We don't really know how these are used,
* but documentation say it exists.
*/
if (dwc->maximum_speed >= USB_SPEED_SUPER)
size /= 3;
else
size /= 2;
size -= mdwidth * (kbytes + 1);
size /= kbytes;
usb_ep_set_maxpacket_limit(&dep->endpoint, size);
@ -2075,39 +2079,8 @@ static int dwc3_gadget_init_in_endpoint(struct dwc3_ep *dep)
static int dwc3_gadget_init_out_endpoint(struct dwc3_ep *dep)
{
struct dwc3 *dwc = dep->dwc;
int mdwidth;
int size;
mdwidth = DWC3_MDWIDTH(dwc->hwparams.hwparams0);
/* MDWIDTH is represented in bits, convert to bytes */
mdwidth /= 8;
/* All OUT endpoints share a single RxFIFO space */
size = dwc3_readl(dwc->regs, DWC3_GRXFIFOSIZ(0));
if (dwc3_is_usb31(dwc))
size = DWC31_GRXFIFOSIZ_RXFDEP(size);
else
size = DWC3_GRXFIFOSIZ_RXFDEP(size);
/* FIFO depth is in MDWDITH bytes */
size *= mdwidth;
/*
* To meet performance requirement, a minimum recommended RxFIFO size
* is defined as follow:
* RxFIFO size >= (3 x MaxPacketSize) +
* (3 x 8 bytes setup packets size) + (16 bytes clock crossing margin)
*
* Then calculate the max packet limit as below.
*/
size -= (3 * 8) + 16;
if (size < 0)
size = 0;
else
size /= 3;
usb_ep_set_maxpacket_limit(&dep->endpoint, size);
usb_ep_set_maxpacket_limit(&dep->endpoint, 1024);
dep->endpoint.max_streams = 15;
dep->endpoint.ops = &dwc3_gadget_ep_ops;
list_add_tail(&dep->endpoint.ep_list,
@ -2303,7 +2276,14 @@ static int dwc3_gadget_ep_reclaim_trb_linear(struct dwc3_ep *dep,
static bool dwc3_gadget_ep_request_completed(struct dwc3_request *req)
{
return req->num_pending_sgs == 0;
/*
* For OUT direction, host may send less than the setup
* length. Return true for all OUT requests.
*/
if (!req->direction)
return true;
return req->request.actual == req->request.length;
}
static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep,
@ -2327,7 +2307,8 @@ static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep,
req->request.actual = req->request.length - req->remaining;
if (!dwc3_gadget_ep_request_completed(req)) {
if (!dwc3_gadget_ep_request_completed(req) ||
req->num_pending_sgs) {
__dwc3_gadget_kick_transfer(dep);
goto out;
}
@ -2381,8 +2362,10 @@ static void dwc3_gadget_endpoint_transfer_in_progress(struct dwc3_ep *dep,
dwc3_gadget_ep_cleanup_completed_requests(dep, event, status);
if (stop)
dwc3_stop_active_transfer(dep, true, true);
if (stop) {
dwc3_stop_active_transfer(dep, true);
dep->flags = DWC3_EP_ENABLED;
}
/*
* WORKAROUND: This is the 2nd half of U1/U2 -> U0 workaround.
@ -2502,8 +2485,7 @@ static void dwc3_reset_gadget(struct dwc3 *dwc)
}
}
static void dwc3_stop_active_transfer(struct dwc3_ep *dep, bool force,
bool interrupt)
static void dwc3_stop_active_transfer(struct dwc3_ep *dep, bool force)
{
struct dwc3 *dwc = dep->dwc;
struct dwc3_gadget_ep_cmd_params params;
@ -2547,7 +2529,7 @@ static void dwc3_stop_active_transfer(struct dwc3_ep *dep, bool force,
cmd = DWC3_DEPCMD_ENDTRANSFER;
cmd |= force ? DWC3_DEPCMD_HIPRI_FORCERM : 0;
cmd |= interrupt ? DWC3_DEPCMD_CMDIOC : 0;
cmd |= DWC3_DEPCMD_CMDIOC;
cmd |= DWC3_DEPCMD_PARAM(dep->resource_index);
memset(&params, 0, sizeof(params));
ret = dwc3_send_gadget_ep_cmd(dep, cmd, &params);
@ -3181,6 +3163,7 @@ int dwc3_gadget_init(struct dwc3 *dwc)
dwc->gadget.speed = USB_SPEED_UNKNOWN;
dwc->gadget.sg_supported = true;
dwc->gadget.name = "dwc3-gadget";
dwc->gadget.is_otg = dwc->dr_mode == USB_DR_MODE_OTG;
/*
* FIXME We might be setting max_speed to <SUPER, however versions


@ -506,22 +506,6 @@ out:
}
EXPORT_SYMBOL_GPL(usb_gadget_wakeup);
/**
* usb_gsi_ep_op - performs operation on GSI accelerated EP based on EP op code
*
* Operations such as EP configuration, TRB allocation, StartXfer etc.
* See gsi_ep_op for more details.
*/
int usb_gsi_ep_op(struct usb_ep *ep,
struct usb_gsi_request *req, enum gsi_ep_op op)
{
if (ep && ep->ops && ep->ops->gsi_ep_op)
return ep->ops->gsi_ep_op(ep, req, op);
return -EOPNOTSUPP;
}
EXPORT_SYMBOL_GPL(usb_gsi_ep_op);
/**
* usb_gadget_func_wakeup - send a function remote wakeup up notification
* to the host connected to this gadget


@ -1837,137 +1837,6 @@ void xhci_free_erst(struct xhci_hcd *xhci, struct xhci_erst *erst)
erst->entries = NULL;
}
void xhci_handle_sec_intr_events(struct xhci_hcd *xhci, int intr_num)
{
union xhci_trb *erdp_trb, *current_trb;
struct xhci_segment *seg;
u64 erdp_reg;
u32 iman_reg;
dma_addr_t deq;
unsigned long segment_offset;
/* disable irq, ack pending interrupt and ack all pending events */
iman_reg =
readl_relaxed(&xhci->sec_ir_set[intr_num]->irq_pending);
iman_reg &= ~IMAN_IE;
writel_relaxed(iman_reg,
&xhci->sec_ir_set[intr_num]->irq_pending);
iman_reg =
readl_relaxed(&xhci->sec_ir_set[intr_num]->irq_pending);
if (iman_reg & IMAN_IP)
writel_relaxed(iman_reg,
&xhci->sec_ir_set[intr_num]->irq_pending);
/* last acked event trb is in erdp reg */
erdp_reg =
xhci_read_64(xhci, &xhci->sec_ir_set[intr_num]->erst_dequeue);
deq = (dma_addr_t)(erdp_reg & ~ERST_PTR_MASK);
if (!deq) {
pr_debug("%s: event ring handling not required\n", __func__);
return;
}
seg = xhci->sec_event_ring[intr_num]->first_seg;
segment_offset = deq - seg->dma;
/* find out virtual address of the last acked event trb */
erdp_trb = current_trb = &seg->trbs[0] +
(segment_offset/sizeof(*current_trb));
/* read cycle state of the last acked trb to find out CCS */
xhci->sec_event_ring[intr_num]->cycle_state =
(current_trb->event_cmd.flags & TRB_CYCLE);
while (1) {
/* last trb of the event ring: toggle cycle state */
if (current_trb == &seg->trbs[TRBS_PER_SEGMENT - 1]) {
xhci->sec_event_ring[intr_num]->cycle_state ^= 1;
current_trb = &seg->trbs[0];
} else {
current_trb++;
}
/* cycle state transition */
if ((le32_to_cpu(current_trb->event_cmd.flags) & TRB_CYCLE) !=
xhci->sec_event_ring[intr_num]->cycle_state)
break;
}
if (erdp_trb != current_trb) {
deq =
xhci_trb_virt_to_dma(xhci->sec_event_ring[intr_num]->deq_seg,
current_trb);
if (deq == 0)
xhci_warn(xhci,
"WARN invalid SW event ring dequeue ptr.\n");
/* Update HC event ring dequeue pointer */
erdp_reg &= ERST_PTR_MASK;
erdp_reg |= ((u64) deq & (u64) ~ERST_PTR_MASK);
}
/* Clear the event handler busy flag (RW1C); event ring is empty. */
erdp_reg |= ERST_EHB;
xhci_write_64(xhci, erdp_reg,
&xhci->sec_ir_set[intr_num]->erst_dequeue);
}
int xhci_sec_event_ring_cleanup(struct usb_hcd *hcd, unsigned int intr_num)
{
int size;
struct xhci_hcd *xhci = hcd_to_xhci(hcd);
struct device *dev = xhci_to_hcd(xhci)->self.sysdev;
if (intr_num >= xhci->max_interrupters) {
xhci_err(xhci, "invalid secondary interrupter num %d\n",
intr_num);
return -EINVAL;
}
size =
sizeof(struct xhci_erst_entry)*(xhci->sec_erst[intr_num].num_entries);
if (xhci->sec_erst[intr_num].entries) {
xhci_handle_sec_intr_events(xhci, intr_num);
dma_free_coherent(dev, size, xhci->sec_erst[intr_num].entries,
xhci->sec_erst[intr_num].erst_dma_addr);
xhci->sec_erst[intr_num].entries = NULL;
}
xhci_dbg_trace(xhci, trace_xhci_dbg_init, "Freed SEC ERST#%d",
intr_num);
if (xhci->sec_event_ring[intr_num])
xhci_ring_free(xhci, xhci->sec_event_ring[intr_num]);
xhci->sec_event_ring[intr_num] = NULL;
xhci_dbg_trace(xhci, trace_xhci_dbg_init,
"Freed sec event ring");
return 0;
}
void xhci_event_ring_cleanup(struct xhci_hcd *xhci)
{
unsigned int i;
/* sec event ring clean up */
for (i = 1; i < xhci->max_interrupters; i++)
xhci_sec_event_ring_cleanup(xhci_to_hcd(xhci), i);
kfree(xhci->sec_ir_set);
xhci->sec_ir_set = NULL;
kfree(xhci->sec_erst);
xhci->sec_erst = NULL;
kfree(xhci->sec_event_ring);
xhci->sec_event_ring = NULL;
/* primary event ring clean up */
xhci_free_erst(xhci, &xhci->erst);
xhci_dbg_trace(xhci, trace_xhci_dbg_init, "Freed primary ERST");
if (xhci->event_ring)
xhci_ring_free(xhci, xhci->event_ring);
xhci->event_ring = NULL;
xhci_dbg_trace(xhci, trace_xhci_dbg_init, "Freed primary event ring");
}
void xhci_mem_cleanup(struct xhci_hcd *xhci)
{
struct device *dev = xhci_to_hcd(xhci)->self.sysdev;
@ -1975,7 +1844,12 @@ void xhci_mem_cleanup(struct xhci_hcd *xhci)
cancel_delayed_work_sync(&xhci->cmd_timer);
xhci_event_ring_cleanup(xhci);
xhci_free_erst(xhci, &xhci->erst);
if (xhci->event_ring)
xhci_ring_free(xhci, xhci->event_ring);
xhci->event_ring = NULL;
xhci_dbg_trace(xhci, trace_xhci_dbg_init, "Freed event ring");
if (xhci->lpm_command)
xhci_free_command(xhci, xhci->lpm_command);
@ -2220,6 +2094,30 @@ static int xhci_check_trb_in_td_math(struct xhci_hcd *xhci)
return 0;
}
static void xhci_set_hc_event_deq(struct xhci_hcd *xhci)
{
u64 temp;
dma_addr_t deq;
deq = xhci_trb_virt_to_dma(xhci->event_ring->deq_seg,
xhci->event_ring->dequeue);
if (deq == 0 && !in_interrupt())
xhci_warn(xhci, "WARN something wrong with SW event ring "
"dequeue ptr.\n");
/* Update HC event ring dequeue pointer */
temp = xhci_read_64(xhci, &xhci->ir_set->erst_dequeue);
temp &= ERST_PTR_MASK;
/* Don't clear the EHB bit (which is RW1C) because
* there might be more events to service.
*/
temp &= ~ERST_EHB;
xhci_dbg_trace(xhci, trace_xhci_dbg_init,
"// Write event ring dequeue pointer, "
"preserving EHB bit");
xhci_write_64(xhci, ((u64) deq & (u64) ~ERST_PTR_MASK) | temp,
&xhci->ir_set->erst_dequeue);
}
static void xhci_add_in_port(struct xhci_hcd *xhci, unsigned int num_ports,
__le32 __iomem *addr, int max_caps)
{
@ -2483,154 +2381,6 @@ static int xhci_setup_port_arrays(struct xhci_hcd *xhci, gfp_t flags)
return 0;
}
int xhci_event_ring_setup(struct xhci_hcd *xhci, struct xhci_ring **er,
struct xhci_intr_reg __iomem *ir_set, struct xhci_erst *erst,
unsigned int intr_num, gfp_t flags)
{
dma_addr_t deq;
u64 val_64;
unsigned int val;
int ret;
*er = xhci_ring_alloc(xhci, ERST_NUM_SEGS, 1, TYPE_EVENT, 0, flags);
if (!*er)
return -ENOMEM;
ret = xhci_alloc_erst(xhci, *er, erst, flags);
if (ret)
return ret;
xhci_dbg_trace(xhci, trace_xhci_dbg_init,
"intr# %d: num segs = %i, virt addr = %pK, dma addr = 0x%llx",
intr_num,
erst->num_entries,
erst->entries,
(unsigned long long)erst->erst_dma_addr);
/* set ERST count with the number of entries in the segment table */
val = readl_relaxed(&ir_set->erst_size);
val &= ERST_SIZE_MASK;
val |= ERST_NUM_SEGS;
xhci_dbg_trace(xhci, trace_xhci_dbg_init,
"Write ERST size = %i to ir_set %d (some bits preserved)", val,
intr_num);
writel_relaxed(val, &ir_set->erst_size);
xhci_dbg_trace(xhci, trace_xhci_dbg_init,
"intr# %d: Set ERST entries to point to event ring.",
intr_num);
/* set the segment table base address */
xhci_dbg_trace(xhci, trace_xhci_dbg_init,
"Set ERST base address for ir_set %d = 0x%llx",
intr_num,
(unsigned long long)erst->erst_dma_addr);
val_64 = xhci_read_64(xhci, &ir_set->erst_base);
val_64 &= ERST_PTR_MASK;
val_64 |= (erst->erst_dma_addr & (u64) ~ERST_PTR_MASK);
xhci_write_64(xhci, val_64, &ir_set->erst_base);
/* Set the event ring dequeue address */
deq = xhci_trb_virt_to_dma((*er)->deq_seg, (*er)->dequeue);
if (deq == 0 && !in_interrupt())
xhci_warn(xhci,
"intr# %d:WARN something wrong with SW event ring deq ptr.\n",
intr_num);
/* Update HC event ring dequeue pointer */
val_64 = xhci_read_64(xhci, &ir_set->erst_dequeue);
val_64 &= ERST_PTR_MASK;
/* Don't clear the EHB bit (which is RW1C) because
* there might be more events to service.
*/
val_64 &= ~ERST_EHB;
xhci_dbg_trace(xhci, trace_xhci_dbg_init,
"intr# %d:Write event ring dequeue pointer, preserving EHB bit",
intr_num);
xhci_write_64(xhci, ((u64) deq & (u64) ~ERST_PTR_MASK) | val_64,
&ir_set->erst_dequeue);
xhci_dbg_trace(xhci, trace_xhci_dbg_init,
"Wrote ERST address to ir_set %d.", intr_num);
return 0;
}
int xhci_sec_event_ring_setup(struct usb_hcd *hcd, unsigned int intr_num)
{
int ret;
struct xhci_hcd *xhci = hcd_to_xhci(hcd);
if ((xhci->xhc_state & XHCI_STATE_HALTED) || !xhci->sec_ir_set
|| !xhci->sec_event_ring || !xhci->sec_erst ||
intr_num >= xhci->max_interrupters) {
xhci_err(xhci,
"%s:state %x ir_set %pK evt_ring %pK erst %pK intr# %d\n",
__func__, xhci->xhc_state, xhci->sec_ir_set,
xhci->sec_event_ring, xhci->sec_erst, intr_num);
return -EINVAL;
}
if (xhci->sec_event_ring && xhci->sec_event_ring[intr_num]
&& xhci->sec_event_ring[intr_num]->first_seg)
goto done;
xhci->sec_ir_set[intr_num] = &xhci->run_regs->ir_set[intr_num];
ret = xhci_event_ring_setup(xhci,
&xhci->sec_event_ring[intr_num],
xhci->sec_ir_set[intr_num],
&xhci->sec_erst[intr_num],
intr_num, GFP_KERNEL);
if (ret) {
xhci_err(xhci, "sec event ring setup failed inter#%d\n",
intr_num);
return ret;
}
done:
return 0;
}
int xhci_event_ring_init(struct xhci_hcd *xhci, gfp_t flags)
{
int ret = 0;
/* primary + secondary */
xhci->max_interrupters = HCS_MAX_INTRS(xhci->hcs_params1);
xhci_dbg_trace(xhci, trace_xhci_dbg_init,
"// Allocating primary event ring");
/* Set ir_set to interrupt register set 0 */
xhci->ir_set = &xhci->run_regs->ir_set[0];
ret = xhci_event_ring_setup(xhci, &xhci->event_ring, xhci->ir_set,
&xhci->erst, 0, flags);
if (ret) {
xhci_err(xhci, "failed to setup primary event ring\n");
goto fail;
}
xhci_dbg_trace(xhci, trace_xhci_dbg_init,
"// Allocating sec event ring related pointers");
xhci->sec_ir_set = kcalloc(xhci->max_interrupters,
sizeof(*xhci->sec_ir_set), flags);
if (!xhci->sec_ir_set) {
ret = -ENOMEM;
goto fail;
}
xhci->sec_event_ring = kcalloc(xhci->max_interrupters,
sizeof(*xhci->sec_event_ring), flags);
if (!xhci->sec_event_ring) {
ret = -ENOMEM;
goto fail;
}
xhci->sec_erst = kcalloc(xhci->max_interrupters,
sizeof(*xhci->sec_erst), flags);
if (!xhci->sec_erst)
ret = -ENOMEM;
fail:
return ret;
}
int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
{
dma_addr_t dma;
@ -2638,7 +2388,7 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
unsigned int val, val2;
u64 val_64;
u32 page_size, temp;
int i;
int i, ret;
INIT_LIST_HEAD(&xhci->cmd_list);
@ -2759,17 +2509,50 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
"// Doorbell array is located at offset 0x%x"
" from cap regs base addr", val);
xhci->dba = (void __iomem *) xhci->cap_regs + val;
/* Set ir_set to interrupt register set 0 */
xhci->ir_set = &xhci->run_regs->ir_set[0];
/*
* Event ring setup: Allocate a normal ring, but also setup
* the event ring segment table (ERST). Section 4.9.3.
*/
if (xhci_event_ring_init(xhci, GFP_KERNEL))
xhci_dbg_trace(xhci, trace_xhci_dbg_init, "// Allocating event ring");
xhci->event_ring = xhci_ring_alloc(xhci, ERST_NUM_SEGS, 1, TYPE_EVENT,
0, flags);
if (!xhci->event_ring)
goto fail;
if (xhci_check_trb_in_td_math(xhci) < 0)
goto fail;
ret = xhci_alloc_erst(xhci, xhci->event_ring, &xhci->erst, flags);
if (ret)
goto fail;
/* set ERST count with the number of entries in the segment table */
val = readl(&xhci->ir_set->erst_size);
val &= ERST_SIZE_MASK;
val |= ERST_NUM_SEGS;
xhci_dbg_trace(xhci, trace_xhci_dbg_init,
"// Write ERST size = %i to ir_set 0 (some bits preserved)",
val);
writel(val, &xhci->ir_set->erst_size);
xhci_dbg_trace(xhci, trace_xhci_dbg_init,
"// Set ERST entries to point to event ring.");
/* set the segment table base address */
xhci_dbg_trace(xhci, trace_xhci_dbg_init,
"// Set ERST base address for ir_set 0 = 0x%llx",
(unsigned long long)xhci->erst.erst_dma_addr);
val_64 = xhci_read_64(xhci, &xhci->ir_set->erst_base);
val_64 &= ERST_PTR_MASK;
val_64 |= (xhci->erst.erst_dma_addr & (u64) ~ERST_PTR_MASK);
xhci_write_64(xhci, val_64, &xhci->ir_set->erst_base);
/* Set the event ring dequeue address */
xhci_set_hc_event_deq(xhci);
xhci_dbg_trace(xhci, trace_xhci_dbg_init,
"Wrote ERST address to ir_set 0.");
/*
* XXX: Might need to set the Interrupter Moderation Register to
* something other than the default (~1ms minimum between interrupts).

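The ERDP writes in xhci_set_hc_event_deq() above and in the per-interrupter setup follow the same pattern. A condensed sketch of that one step is below; the mask and flag names are the ones used in the hunk, while the helper itself is invented for illustration.

/*
 * Sketch of the ERDP update pattern above; not a drop-in function.
 */
static void sketch_write_erdp(struct xhci_hcd *xhci, u64 deq_dma,
			      __le64 __iomem *erst_dequeue)
{
	u64 temp = xhci_read_64(xhci, erst_dequeue);

	/* Keep the controller-owned low bits of the register... */
	temp &= ERST_PTR_MASK;
	/*
	 * ...but force EHB to 0: it is RW1C, so writing it back as 1 would
	 * clear the busy flag even though more events may still be pending.
	 */
	temp &= ~ERST_EHB;

	/* Merge in the 64-byte-aligned dequeue address and write it back. */
	xhci_write_64(xhci, (deq_dma & (u64)~ERST_PTR_MASK) | temp,
		      erst_dequeue);
}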

@ -5185,136 +5185,6 @@ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks)
}
EXPORT_SYMBOL_GPL(xhci_gen_setup);
static phys_addr_t xhci_get_sec_event_ring_phys_addr(struct usb_hcd *hcd,
unsigned int intr_num, dma_addr_t *dma)
{
struct xhci_hcd *xhci = hcd_to_xhci(hcd);
struct device *dev = hcd->self.sysdev;
struct sg_table sgt;
phys_addr_t pa;
if (intr_num > xhci->max_interrupters) {
xhci_err(xhci, "intr num %d > max intrs %d\n", intr_num,
xhci->max_interrupters);
return 0;
}
if (!(xhci->xhc_state & XHCI_STATE_HALTED) &&
xhci->sec_event_ring && xhci->sec_event_ring[intr_num]
&& xhci->sec_event_ring[intr_num]->first_seg) {
dma_get_sgtable(dev, &sgt,
xhci->sec_event_ring[intr_num]->first_seg->trbs,
xhci->sec_event_ring[intr_num]->first_seg->dma,
TRB_SEGMENT_SIZE);
*dma = xhci->sec_event_ring[intr_num]->first_seg->dma;
pa = page_to_phys(sg_page(sgt.sgl));
sg_free_table(&sgt);
return pa;
}
return 0;
}
static phys_addr_t xhci_get_xfer_ring_phys_addr(struct usb_hcd *hcd,
struct usb_device *udev, struct usb_host_endpoint *ep, dma_addr_t *dma)
{
int ret;
unsigned int ep_index;
struct xhci_virt_device *virt_dev;
struct device *dev = hcd->self.sysdev;
struct xhci_hcd *xhci = hcd_to_xhci(hcd);
struct sg_table sgt;
phys_addr_t pa;
ret = xhci_check_args(hcd, udev, ep, 1, true, __func__);
if (ret <= 0) {
xhci_err(xhci, "%s: invalid args\n", __func__);
return 0;
}
virt_dev = xhci->devs[udev->slot_id];
ep_index = xhci_get_endpoint_index(&ep->desc);
if (virt_dev->eps[ep_index].ring &&
virt_dev->eps[ep_index].ring->first_seg) {
dma_get_sgtable(dev, &sgt,
virt_dev->eps[ep_index].ring->first_seg->trbs,
virt_dev->eps[ep_index].ring->first_seg->dma,
TRB_SEGMENT_SIZE);
*dma = virt_dev->eps[ep_index].ring->first_seg->dma;
pa = page_to_phys(sg_page(sgt.sgl));
sg_free_table(&sgt);
return pa;
}
return 0;
}
static int xhci_stop_endpoint(struct usb_hcd *hcd,
struct usb_device *udev, struct usb_host_endpoint *ep)
{
struct xhci_hcd *xhci;
unsigned int ep_index;
struct xhci_virt_device *virt_dev;
struct xhci_command *cmd;
unsigned long flags;
int ret = 0;
if (!hcd || !udev || !ep)
return -EINVAL;
xhci = hcd_to_xhci(hcd);
cmd = xhci_alloc_command(xhci, true, GFP_NOIO);
if (!cmd)
return -ENOMEM;
spin_lock_irqsave(&xhci->lock, flags);
virt_dev = xhci->devs[udev->slot_id];
if (!virt_dev) {
ret = -ENODEV;
goto err;
}
ep_index = xhci_get_endpoint_index(&ep->desc);
if (virt_dev->eps[ep_index].ring &&
virt_dev->eps[ep_index].ring->dequeue) {
ret = xhci_queue_stop_endpoint(xhci, cmd, udev->slot_id,
ep_index, 0);
if (ret)
goto err;
xhci_ring_cmd_db(xhci);
spin_unlock_irqrestore(&xhci->lock, flags);
/* Wait for stop endpoint command to finish */
wait_for_completion(cmd->completion);
if (cmd->status == COMP_COMMAND_ABORTED ||
cmd->status == COMP_STOPPED) {
xhci_warn(xhci,
"stop endpoint command timeout for ep%d%s\n",
usb_endpoint_num(&ep->desc),
usb_endpoint_dir_in(&ep->desc) ? "in" : "out");
ret = -ETIME;
}
goto free_cmd;
}
err:
spin_unlock_irqrestore(&xhci->lock, flags);
free_cmd:
xhci_free_command(xhci, cmd);
return ret;
}
static const struct hc_driver xhci_hc_driver = {
.description = "xhci-hcd",
.product_desc = "xHCI Host Controller",
@ -5375,11 +5245,6 @@ static const struct hc_driver xhci_hc_driver = {
.enable_usb3_lpm_timeout = xhci_enable_usb3_lpm_timeout,
.disable_usb3_lpm_timeout = xhci_disable_usb3_lpm_timeout,
.find_raw_port_number = xhci_find_raw_port_number,
.sec_event_ring_setup = xhci_sec_event_ring_setup,
.sec_event_ring_cleanup = xhci_sec_event_ring_cleanup,
.get_sec_event_ring_phys_addr = xhci_get_sec_event_ring_phys_addr,
.get_xfer_ring_phys_addr = xhci_get_xfer_ring_phys_addr,
.stop_endpoint = xhci_stop_endpoint,
};
void xhci_init_driver(struct hc_driver *drv,


@ -1741,10 +1741,6 @@ struct xhci_hcd {
struct xhci_doorbell_array __iomem *dba;
/* Our HCD's current interrupter register set */
struct xhci_intr_reg __iomem *ir_set;
/* secondary interrupter */
struct xhci_intr_reg __iomem **sec_ir_set;
int core_id;
/* Cached register copies of read-only HC data */
__u32 hcs_params1;
@ -1788,11 +1784,6 @@ struct xhci_hcd {
struct xhci_command *current_cmd;
struct xhci_ring *event_ring;
struct xhci_erst erst;
/* secondary event ring and erst */
struct xhci_ring **sec_event_ring;
struct xhci_erst *sec_erst;
/* Scratchpad */
struct xhci_scratchpad *scratchpad;
/* Store LPM test failed devices' information */
@ -2061,8 +2052,6 @@ struct xhci_container_ctx *xhci_alloc_container_ctx(struct xhci_hcd *xhci,
int type, gfp_t flags);
void xhci_free_container_ctx(struct xhci_hcd *xhci,
struct xhci_container_ctx *ctx);
int xhci_sec_event_ring_setup(struct usb_hcd *hcd, unsigned int intr_num);
int xhci_sec_event_ring_cleanup(struct usb_hcd *hcd, unsigned int intr_num);
/* xHCI host controller glue */
typedef void (*xhci_get_quirks_t)(struct device *, struct xhci_hcd *);


@ -8,8 +8,6 @@
#include <linux/file.h>
#include <linux/ktime.h>
#include <linux/mm.h>
#include <linux/workqueue.h>
#include <linux/pagemap.h>
#include <linux/lz4.h>
#include <linux/crc32.h>
@ -17,13 +15,6 @@
#include "format.h"
#include "integrity.h"
static void log_wake_up_all(struct work_struct *work)
{
struct delayed_work *dw = container_of(work, struct delayed_work, work);
struct read_log *rl = container_of(dw, struct read_log, ml_wakeup_work);
wake_up_all(&rl->ml_notif_wq);
}
struct mount_info *incfs_alloc_mount_info(struct super_block *sb,
struct mount_options *options,
struct path *backing_dir_path)
@ -36,20 +27,28 @@ struct mount_info *incfs_alloc_mount_info(struct super_block *sb,
return ERR_PTR(-ENOMEM);
mi->mi_sb = sb;
mi->mi_options = *options;
mi->mi_backing_dir_path = *backing_dir_path;
mi->mi_owner = get_current_cred();
path_get(&mi->mi_backing_dir_path);
mutex_init(&mi->mi_dir_struct_mutex);
mutex_init(&mi->mi_pending_reads_mutex);
init_waitqueue_head(&mi->mi_pending_reads_notif_wq);
init_waitqueue_head(&mi->mi_log.ml_notif_wq);
INIT_DELAYED_WORK(&mi->mi_log.ml_wakeup_work, log_wake_up_all);
spin_lock_init(&mi->mi_log.rl_lock);
INIT_LIST_HEAD(&mi->mi_reads_list_head);
error = incfs_realloc_mount_info(mi, options);
if (error)
goto err;
if (options->read_log_pages != 0) {
size_t buf_size = PAGE_SIZE * options->read_log_pages;
spin_lock_init(&mi->mi_log.rl_writer_lock);
init_waitqueue_head(&mi->mi_log.ml_notif_wq);
mi->mi_log.rl_size = buf_size / sizeof(*mi->mi_log.rl_ring_buf);
mi->mi_log.rl_ring_buf = kzalloc(buf_size, GFP_NOFS);
if (!mi->mi_log.rl_ring_buf) {
error = -ENOMEM;
goto err;
}
}
return mi;
@ -58,54 +57,11 @@ err:
return ERR_PTR(error);
}
int incfs_realloc_mount_info(struct mount_info *mi,
struct mount_options *options)
{
void *new_buffer = NULL;
void *old_buffer;
size_t new_buffer_size = 0;
if (options->read_log_pages != mi->mi_options.read_log_pages) {
struct read_log_state log_state;
/*
* Even though having two buffers allocated at once isn't
* usually good, allocating a multipage buffer under a spinlock
* is even worse, so let's optimize for the shorter lock
* duration. It's not end of the world if we fail to increase
* the buffer size anyway.
*/
if (options->read_log_pages > 0) {
new_buffer_size = PAGE_SIZE * options->read_log_pages;
new_buffer = kzalloc(new_buffer_size, GFP_NOFS);
if (!new_buffer)
return -ENOMEM;
}
spin_lock(&mi->mi_log.rl_lock);
old_buffer = mi->mi_log.rl_ring_buf;
mi->mi_log.rl_ring_buf = new_buffer;
mi->mi_log.rl_size = new_buffer_size;
log_state = (struct read_log_state){
.generation_id = mi->mi_log.rl_head.generation_id + 1,
};
mi->mi_log.rl_head = log_state;
mi->mi_log.rl_tail = log_state;
spin_unlock(&mi->mi_log.rl_lock);
kfree(old_buffer);
}
mi->mi_options = *options;
return 0;
}
void incfs_free_mount_info(struct mount_info *mi)
{
if (!mi)
return;
flush_delayed_work(&mi->mi_log.ml_wakeup_work);
dput(mi->mi_index_dir);
path_put(&mi->mi_backing_dir_path);
mutex_destroy(&mi->mi_dir_struct_mutex);
@ -260,136 +216,37 @@ static ssize_t decompress(struct mem_range src, struct mem_range dst)
return result;
}
static void log_read_one_record(struct read_log *rl, struct read_log_state *rs)
{
union log_record *record =
(union log_record *)((u8 *)rl->rl_ring_buf + rs->next_offset);
size_t record_size;
switch (record->full_record.type) {
case FULL:
rs->base_record = record->full_record;
record_size = sizeof(record->full_record);
break;
case SAME_FILE:
rs->base_record.block_index =
record->same_file_record.block_index;
rs->base_record.absolute_ts_us +=
record->same_file_record.relative_ts_us;
record_size = sizeof(record->same_file_record);
break;
case SAME_FILE_NEXT_BLOCK:
++rs->base_record.block_index;
rs->base_record.absolute_ts_us +=
record->same_file_next_block.relative_ts_us;
record_size = sizeof(record->same_file_next_block);
break;
case SAME_FILE_NEXT_BLOCK_SHORT:
++rs->base_record.block_index;
rs->base_record.absolute_ts_us +=
record->same_file_next_block_short.relative_ts_us;
record_size = sizeof(record->same_file_next_block_short);
break;
}
rs->next_offset += record_size;
if (rs->next_offset > rl->rl_size - sizeof(*record)) {
rs->next_offset = 0;
++rs->current_pass_no;
}
++rs->current_record_no;
}
static void log_block_read(struct mount_info *mi, incfs_uuid_t *id,
int block_index)
int block_index, bool timed_out)
{
struct read_log *log = &mi->mi_log;
struct read_log_state *head, *tail;
s64 now_us;
s64 relative_us;
union log_record record;
size_t record_size;
struct read_log_state state;
s64 now_us = ktime_to_us(ktime_get());
struct read_log_record record = {
.file_id = *id,
.block_index = block_index,
.timed_out = timed_out,
.timestamp_us = now_us
};
/*
* This may read the old value, but it's OK to delay the logging start
* right after the configuration update.
*/
if (READ_ONCE(log->rl_size) == 0)
if (log->rl_size == 0)
return;
now_us = ktime_to_us(ktime_get());
spin_lock(&log->rl_lock);
if (log->rl_size == 0) {
spin_unlock(&log->rl_lock);
return;
spin_lock(&log->rl_writer_lock);
state = READ_ONCE(log->rl_state);
log->rl_ring_buf[state.next_index] = record;
if (++state.next_index == log->rl_size) {
state.next_index = 0;
++state.current_pass_no;
}
WRITE_ONCE(log->rl_state, state);
spin_unlock(&log->rl_writer_lock);
head = &log->rl_head;
tail = &log->rl_tail;
relative_us = now_us - head->base_record.absolute_ts_us;
if (memcmp(id, &head->base_record.file_id, sizeof(incfs_uuid_t)) ||
relative_us >= 1ll << 32) {
record.full_record = (struct full_record){
.type = FULL,
.block_index = block_index,
.file_id = *id,
.absolute_ts_us = now_us,
};
head->base_record.file_id = *id;
record_size = sizeof(struct full_record);
} else if (block_index != head->base_record.block_index + 1 ||
relative_us >= 1 << 30) {
record.same_file_record = (struct same_file_record){
.type = SAME_FILE,
.block_index = block_index,
.relative_ts_us = relative_us,
};
record_size = sizeof(struct same_file_record);
} else if (relative_us >= 1 << 14) {
record.same_file_next_block = (struct same_file_next_block){
.type = SAME_FILE_NEXT_BLOCK,
.relative_ts_us = relative_us,
};
record_size = sizeof(struct same_file_next_block);
} else {
record.same_file_next_block_short =
(struct same_file_next_block_short){
.type = SAME_FILE_NEXT_BLOCK_SHORT,
.relative_ts_us = relative_us,
};
record_size = sizeof(struct same_file_next_block_short);
}
head->base_record.block_index = block_index;
head->base_record.absolute_ts_us = now_us;
/* Advance tail beyond area we are going to overwrite */
while (tail->current_pass_no < head->current_pass_no &&
tail->next_offset < head->next_offset + record_size)
log_read_one_record(log, tail);
memcpy(((u8 *)log->rl_ring_buf) + head->next_offset, &record,
record_size);
head->next_offset += record_size;
if (head->next_offset > log->rl_size - sizeof(record)) {
head->next_offset = 0;
++head->current_pass_no;
}
++head->current_record_no;
spin_unlock(&log->rl_lock);
if (schedule_delayed_work(&log->ml_wakeup_work, msecs_to_jiffies(16)))
pr_debug("incfs: scheduled a log pollers wakeup");
wake_up_all(&log->ml_notif_wq);
}
static int validate_hash_tree(struct file *bf, struct data_file *df,
int block_index, struct mem_range data,
u8 *tmp_buf)
int block_index, struct mem_range data, u8 *buf)
{
u8 digest[INCFS_MAX_HASH_SIZE] = {};
struct mtree *tree = NULL;
@ -402,7 +259,6 @@ static int validate_hash_tree(struct file *bf, struct data_file *df,
int hash_per_block;
int lvl = 0;
int res;
struct page *saved_page = NULL;
tree = df->df_hash_tree;
sig = df->df_signature;
@ -424,39 +280,17 @@ static int validate_hash_tree(struct file *bf, struct data_file *df,
INCFS_DATA_FILE_BLOCK_SIZE);
size_t hash_off_in_block = hash_block_index * digest_size
% INCFS_DATA_FILE_BLOCK_SIZE;
struct mem_range buf_range;
struct page *page = NULL;
bool aligned = (hash_block_off &
(INCFS_DATA_FILE_BLOCK_SIZE - 1)) == 0;
u8 *actual_buf;
struct mem_range buf_range = range(buf,
INCFS_DATA_FILE_BLOCK_SIZE);
ssize_t read_res = incfs_kread(bf, buf,
INCFS_DATA_FILE_BLOCK_SIZE, hash_block_off);
if (aligned) {
page = read_mapping_page(
bf->f_inode->i_mapping,
hash_block_off / INCFS_DATA_FILE_BLOCK_SIZE,
NULL);
if (read_res < 0)
return read_res;
if (read_res != INCFS_DATA_FILE_BLOCK_SIZE)
return -EIO;
if (IS_ERR(page))
return PTR_ERR(page);
actual_buf = page_address(page);
} else {
size_t read_res =
incfs_kread(bf, tmp_buf,
INCFS_DATA_FILE_BLOCK_SIZE,
hash_block_off);
if (read_res < 0)
return read_res;
if (read_res != INCFS_DATA_FILE_BLOCK_SIZE)
return -EIO;
actual_buf = tmp_buf;
}
buf_range = range(actual_buf, INCFS_DATA_FILE_BLOCK_SIZE);
saved_digest_rng =
range(actual_buf + hash_off_in_block, digest_size);
saved_digest_rng = range(buf + hash_off_in_block, digest_size);
if (!incfs_equal_ranges(calc_digest_rng, saved_digest_rng)) {
int i;
bool zero = true;
@ -471,37 +305,9 @@ static int validate_hash_tree(struct file *bf, struct data_file *df,
if (zero)
pr_debug("incfs: Note saved_digest all zero - did you forget to load the hashes?\n");
if (saved_page)
put_page(saved_page);
if (page)
put_page(page);
return -EBADMSG;
}
if (saved_page) {
/*
* This is something of a kludge. The PageChecked flag
* is reserved for the file system, but we are setting
* this on the pages belonging to the underlying file
* system. incfs is only going to be used on f2fs and
* ext4 which only use this flag when fs-verity is being
* used, so this is safe for now, however a better
* mechanism needs to be found.
*/
SetPageChecked(saved_page);
put_page(saved_page);
saved_page = NULL;
}
if (page && PageChecked(page)) {
put_page(page);
return 0;
}
saved_page = page;
page = NULL;
res = incfs_calc_digest(tree->alg, buf_range, calc_digest_rng);
if (res)
return res;
@ -511,15 +317,8 @@ static int validate_hash_tree(struct file *bf, struct data_file *df,
root_hash_rng = range(tree->root_hash, digest_size);
if (!incfs_equal_ranges(calc_digest_rng, root_hash_rng)) {
pr_debug("incfs: Root hash mismatch blk:%d\n", block_index);
if (saved_page)
put_page(saved_page);
return -EBADMSG;
}
if (saved_page) {
SetPageChecked(saved_page);
put_page(saved_page);
}
return 0;
}
@ -537,28 +336,13 @@ static bool is_data_block_present(struct data_file_block *block)
(block->db_stored_size != 0);
}
static void convert_data_file_block(struct incfs_blockmap_entry *bme,
struct data_file_block *res_block)
{
u16 flags = le16_to_cpu(bme->me_flags);
res_block->db_backing_file_data_offset =
le16_to_cpu(bme->me_data_offset_hi);
res_block->db_backing_file_data_offset <<= 32;
res_block->db_backing_file_data_offset |=
le32_to_cpu(bme->me_data_offset_lo);
res_block->db_stored_size = le16_to_cpu(bme->me_data_size);
res_block->db_comp_alg = (flags & INCFS_BLOCK_COMPRESSED_LZ4) ?
COMPRESSION_LZ4 :
COMPRESSION_NONE;
}
static int get_data_file_block(struct data_file *df, int index,
struct data_file_block *res_block)
{
struct incfs_blockmap_entry bme = {};
struct backing_file_context *bfc = NULL;
loff_t blockmap_off = 0;
u16 flags = 0;
int error = 0;
if (!df || !res_block)
@ -574,81 +358,48 @@ static int get_data_file_block(struct data_file *df, int index,
if (error)
return error;
convert_data_file_block(&bme, res_block);
return 0;
}
static int check_room_for_one_range(u32 size, u32 size_out)
{
if (size_out + sizeof(struct incfs_filled_range) > size)
return -ERANGE;
flags = le16_to_cpu(bme.me_flags);
res_block->db_backing_file_data_offset =
le16_to_cpu(bme.me_data_offset_hi);
res_block->db_backing_file_data_offset <<= 32;
res_block->db_backing_file_data_offset |=
le32_to_cpu(bme.me_data_offset_lo);
res_block->db_stored_size = le16_to_cpu(bme.me_data_size);
res_block->db_comp_alg = (flags & INCFS_BLOCK_COMPRESSED_LZ4) ?
COMPRESSION_LZ4 :
COMPRESSION_NONE;
return 0;
}
static int copy_one_range(struct incfs_filled_range *range, void __user *buffer,
u32 size, u32 *size_out)
{
int error = check_room_for_one_range(size, *size_out);
if (error)
return error;
if (*size_out + sizeof(*range) > size)
return -ERANGE;
if (copy_to_user(((char __user *)buffer) + *size_out, range,
sizeof(*range)))
if (copy_to_user(((char *)buffer) + *size_out, range, sizeof(*range)))
return -EFAULT;
*size_out += sizeof(*range);
return 0;
}
static int update_file_header_flags(struct data_file *df, u32 bits_to_reset,
u32 bits_to_set)
{
int result;
u32 new_flags;
struct backing_file_context *bfc;
if (!df)
return -EFAULT;
bfc = df->df_backing_file_context;
if (!bfc)
return -EFAULT;
result = mutex_lock_interruptible(&bfc->bc_mutex);
if (result)
return result;
new_flags = (df->df_header_flags & ~bits_to_reset) | bits_to_set;
if (new_flags != df->df_header_flags) {
df->df_header_flags = new_flags;
result = incfs_write_file_header_flags(bfc, new_flags);
}
mutex_unlock(&bfc->bc_mutex);
return result;
}
#define READ_BLOCKMAP_ENTRIES 512
int incfs_get_filled_blocks(struct data_file *df,
struct incfs_get_filled_blocks_args *arg)
{
int error = 0;
bool in_range = false;
struct incfs_filled_range range;
void __user *buffer = u64_to_user_ptr(arg->range_buffer);
void *buffer = u64_to_user_ptr(arg->range_buffer);
u32 size = arg->range_buffer_size;
u32 end_index =
arg->end_index ? arg->end_index : df->df_total_block_count;
u32 *size_out = &arg->range_buffer_size_out;
int i = READ_BLOCKMAP_ENTRIES - 1;
int entries_read = 0;
struct incfs_blockmap_entry *bme;
*size_out = 0;
if (end_index > df->df_total_block_count)
end_index = df->df_total_block_count;
arg->total_blocks_out = df->df_total_block_count;
arg->data_blocks_out = df->df_data_block_count;
if (df->df_header_flags & INCFS_FILE_COMPLETE) {
pr_debug("File marked full, fast get_filled_blocks");
@ -656,71 +407,35 @@ int incfs_get_filled_blocks(struct data_file *df,
arg->index_out = arg->start_index;
return 0;
}
arg->index_out = arg->start_index;
error = check_room_for_one_range(size, *size_out);
if (error)
return error;
range = (struct incfs_filled_range){
.begin = arg->start_index,
.end = end_index,
};
error = copy_one_range(&range, buffer, size, size_out);
if (error)
return error;
arg->index_out = end_index;
return 0;
return copy_one_range(&range, buffer, size, size_out);
}
bme = kzalloc(sizeof(*bme) * READ_BLOCKMAP_ENTRIES,
GFP_NOFS | __GFP_COMP);
if (!bme)
return -ENOMEM;
for (arg->index_out = arg->start_index; arg->index_out < end_index;
++arg->index_out) {
struct data_file_block dfb;
if (++i == READ_BLOCKMAP_ENTRIES) {
entries_read = incfs_read_blockmap_entries(
df->df_backing_file_context, bme,
arg->index_out, READ_BLOCKMAP_ENTRIES,
df->df_blockmap_off);
if (entries_read < 0) {
error = entries_read;
break;
}
i = 0;
}
if (i >= entries_read) {
error = -EIO;
error = get_data_file_block(df, arg->index_out, &dfb);
if (error)
break;
}
convert_data_file_block(bme + i, &dfb);
if (is_data_block_present(&dfb) == in_range)
continue;
if (!in_range) {
error = check_room_for_one_range(size, *size_out);
if (error)
break;
in_range = true;
range.begin = arg->index_out;
} else {
range.end = arg->index_out;
error = copy_one_range(&range, buffer, size, size_out);
if (error) {
/* there will be another try out of the loop,
* it will reset the index_out if it fails too
*/
if (error)
break;
}
in_range = false;
}
}
@ -728,20 +443,21 @@ int incfs_get_filled_blocks(struct data_file *df,
if (in_range) {
range.end = arg->index_out;
error = copy_one_range(&range, buffer, size, size_out);
if (error)
arg->index_out = range.begin;
}
if (!error && in_range && arg->start_index == 0 &&
end_index == df->df_total_block_count &&
*size_out == sizeof(struct incfs_filled_range)) {
int result =
update_file_header_flags(df, 0, INCFS_FILE_COMPLETE);
int result;
df->df_header_flags |= INCFS_FILE_COMPLETE;
result = incfs_update_file_header_flags(
df->df_backing_file_context, df->df_header_flags);
/* Log failure only, since it's just a failed optimization */
pr_debug("Marked file full with result %d", result);
}
kfree(bme);
return error;
}
@ -875,7 +591,8 @@ static int wait_for_data_block(struct data_file *df, int block_index,
mi = df->df_mount_info;
if (timeout_ms == 0) {
log_block_read(mi, &df->df_id, block_index);
log_block_read(mi, &df->df_id, block_index,
true /*timed out*/);
return -ETIME;
}
@ -894,7 +611,8 @@ static int wait_for_data_block(struct data_file *df, int block_index,
if (wait_res == 0) {
/* Wait has timed out */
log_block_read(mi, &df->df_id, block_index);
log_block_read(mi, &df->df_id, block_index,
true /*timed out*/);
return -ETIME;
}
if (wait_res < 0) {
@ -990,7 +708,7 @@ ssize_t incfs_read_data_file_block(struct mem_range dst, struct data_file *df,
}
if (result >= 0)
log_block_read(mi, &df->df_id, index);
log_block_read(mi, &df->df_id, index, false /*timed out*/);
out:
return result;
@ -1360,29 +1078,36 @@ struct read_log_state incfs_get_log_state(struct mount_info *mi)
struct read_log *log = &mi->mi_log;
struct read_log_state result;
spin_lock(&log->rl_lock);
result = log->rl_head;
spin_unlock(&log->rl_lock);
spin_lock(&log->rl_writer_lock);
result = READ_ONCE(log->rl_state);
spin_unlock(&log->rl_writer_lock);
return result;
}
static u64 calc_record_count(const struct read_log_state *state, int rl_size)
{
return state->current_pass_no * (u64)rl_size + state->next_index;
}
int incfs_get_uncollected_logs_count(struct mount_info *mi,
const struct read_log_state *state)
struct read_log_state state)
{
struct read_log *log = &mi->mi_log;
u32 generation;
u64 head_no, tail_no;
spin_lock(&log->rl_lock);
tail_no = log->rl_tail.current_record_no;
head_no = log->rl_head.current_record_no;
generation = log->rl_head.generation_id;
spin_unlock(&log->rl_lock);
u64 count = calc_record_count(&log->rl_state, log->rl_size) -
calc_record_count(&state, log->rl_size);
return min_t(int, count, log->rl_size);
}
if (generation != state->generation_id)
return head_no - tail_no;
else
return head_no - max_t(u64, tail_no, state->current_record_no);
static void fill_pending_read_from_log_record(
struct incfs_pending_read_info *dest, const struct read_log_record *src,
struct read_log_state *state, u64 log_size)
{
dest->file_id = src->file_id;
dest->block_index = src->block_index;
dest->serial_number =
state->current_pass_no * log_size + state->next_index;
dest->timestamp_us = src->timestamp_us;
}
int incfs_collect_logged_reads(struct mount_info *mi,
@ -1390,47 +1115,58 @@ int incfs_collect_logged_reads(struct mount_info *mi,
struct incfs_pending_read_info *reads,
int reads_size)
{
int dst_idx;
struct read_log *log = &mi->mi_log;
struct read_log_state *head, *tail;
struct read_log_state live_state = incfs_get_log_state(mi);
u64 read_count = calc_record_count(reader_state, log->rl_size);
u64 written_count = calc_record_count(&live_state, log->rl_size);
int dst_idx;
spin_lock(&log->rl_lock);
head = &log->rl_head;
tail = &log->rl_tail;
if (reader_state->next_index >= log->rl_size ||
read_count > written_count)
return -ERANGE;
if (reader_state->generation_id != head->generation_id) {
pr_debug("read ptr is wrong generation: %u/%u",
reader_state->generation_id, head->generation_id);
if (read_count == written_count)
return 0;
*reader_state = (struct read_log_state){
.generation_id = head->generation_id,
};
if (read_count > written_count) {
/* This reader is somehow ahead of the writer. */
pr_debug("incfs: Log reader is ahead of writer\n");
*reader_state = live_state;
}
if (reader_state->current_record_no < tail->current_record_no) {
pr_debug("read ptr is behind, moving: %u/%u -> %u/%u\n",
(u32)reader_state->next_offset,
(u32)reader_state->current_pass_no,
(u32)tail->next_offset, (u32)tail->current_pass_no);
if (written_count - read_count > log->rl_size) {
/*
* Reading pointer is too far behind,
* start from the record following the write pointer.
*/
pr_debug("incfs: read pointer is behind, moving: %u/%u -> %u/%u / %u\n",
(u32)reader_state->next_index,
(u32)reader_state->current_pass_no,
(u32)live_state.next_index,
(u32)live_state.current_pass_no - 1, (u32)log->rl_size);
*reader_state = *tail;
*reader_state = (struct read_log_state){
.next_index = live_state.next_index,
.current_pass_no = live_state.current_pass_no - 1,
};
}
for (dst_idx = 0; dst_idx < reads_size; dst_idx++) {
if (reader_state->current_record_no == head->current_record_no)
if (reader_state->next_index == live_state.next_index &&
reader_state->current_pass_no == live_state.current_pass_no)
break;
log_read_one_record(log, reader_state);
fill_pending_read_from_log_record(
&reads[dst_idx],
&log->rl_ring_buf[reader_state->next_index],
reader_state, log->rl_size);
reads[dst_idx] = (struct incfs_pending_read_info){
.file_id = reader_state->base_record.file_id,
.block_index = reader_state->base_record.block_index,
.serial_number = reader_state->current_record_no,
.timestamp_us = reader_state->base_record.absolute_ts_us
};
reader_state->next_index++;
if (reader_state->next_index == log->rl_size) {
reader_state->next_index = 0;
reader_state->current_pass_no++;
}
}
spin_unlock(&log->rl_lock);
return dst_idx;
}
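One side of the hunk above tracks the log with fixed-size slots and flattens (pass number, slot index) into an absolute record count to compare reader and writer positions. A minimal sketch of that arithmetic follows; the helper names are invented, the field names are those of struct read_log_state.

/*
 * Sketch only, not part of the patch.
 */
static u64 sketch_record_count(const struct read_log_state *s, int rl_size)
{
	/* Absolute number of records this state has passed over. */
	return (u64)s->current_pass_no * rl_size + s->next_index;
}

static int sketch_uncollected(const struct read_log_state *writer,
			      const struct read_log_state *reader,
			      int rl_size)
{
	u64 behind = sketch_record_count(writer, rl_size) -
		     sketch_record_count(reader, rl_size);

	/* The buffer only ever holds rl_size records, so cap the answer. */
	return min_t(int, behind, rl_size);
}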


@ -20,78 +20,38 @@
#define SEGMENTS_PER_FILE 3
enum LOG_RECORD_TYPE {
FULL,
SAME_FILE,
SAME_FILE_NEXT_BLOCK,
SAME_FILE_NEXT_BLOCK_SHORT,
};
struct read_log_record {
u32 block_index : 31;
u32 timed_out : 1;
u64 timestamp_us;
struct full_record {
enum LOG_RECORD_TYPE type : 2; /* FULL */
u32 block_index : 30;
incfs_uuid_t file_id;
u64 absolute_ts_us;
} __packed; /* 28 bytes */
struct same_file_record {
enum LOG_RECORD_TYPE type : 2; /* SAME_FILE */
u32 block_index : 30;
u32 relative_ts_us; /* max 2^32 us ~= 1 hour (1:11:30) */
} __packed; /* 12 bytes */
struct same_file_next_block {
enum LOG_RECORD_TYPE type : 2; /* SAME_FILE_NEXT_BLOCK */
u32 relative_ts_us : 30; /* max 2^30 us ~= 15 min (17:50) */
} __packed; /* 4 bytes */
struct same_file_next_block_short {
enum LOG_RECORD_TYPE type : 2; /* SAME_FILE_NEXT_BLOCK_SHORT */
u16 relative_ts_us : 14; /* max 2^14 us ~= 16 ms */
} __packed; /* 2 bytes */
union log_record {
struct full_record full_record;
struct same_file_record same_file_record;
struct same_file_next_block same_file_next_block;
struct same_file_next_block_short same_file_next_block_short;
};
} __packed;
struct read_log_state {
/* Log buffer generation id, incremented on configuration changes */
u32 generation_id;
/* Next slot in rl_ring_buf to write to. */
u32 next_index;
/* Offset in rl_ring_buf to write into. */
u32 next_offset;
/* Current number of writer passes over rl_ring_buf */
/* Current number of writer pass over rl_ring_buf */
u32 current_pass_no;
/* Current full_record to diff against */
struct full_record base_record;
/* Current record number counting from configuration change */
u64 current_record_no;
};
/* A ring buffer to save records about data blocks which were recently read. */
struct read_log {
void *rl_ring_buf;
struct read_log_record *rl_ring_buf;
struct read_log_state rl_state;
spinlock_t rl_writer_lock;
int rl_size;
struct read_log_state rl_head;
struct read_log_state rl_tail;
/* A lock to protect the above fields */
spinlock_t rl_lock;
/* A queue of waiters who want to be notified about reads */
/*
* A queue of waiters who want to be notified about reads.
*/
wait_queue_head_t ml_notif_wq;
/* A work item to wake up those waiters without slowing down readers */
struct delayed_work ml_wakeup_work;
};
struct mount_options {
@ -263,9 +223,6 @@ struct mount_info *incfs_alloc_mount_info(struct super_block *sb,
struct mount_options *options,
struct path *backing_dir_path);
int incfs_realloc_mount_info(struct mount_info *mi,
struct mount_options *options);
void incfs_free_mount_info(struct mount_info *mi);
struct data_file *incfs_open_data_file(struct mount_info *mi, struct file *bf);
@ -308,7 +265,7 @@ int incfs_collect_logged_reads(struct mount_info *mi,
int reads_size);
struct read_log_state incfs_get_log_state(struct mount_info *mi);
int incfs_get_uncollected_logs_count(struct mount_info *mi,
const struct read_log_state *state);
struct read_log_state state);
static inline struct inode_info *get_incfs_node(struct inode *inode)
{

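One side of these hunks defines variable-length log records, and the writer picks the smallest class that still captures the read. A sketch of that selection, mirroring the thresholds in the struct comments above; the helper itself is invented for illustration.

/*
 * Sketch only: which record size class the writer would choose.
 */
static size_t sketch_pick_record_size(bool same_file, bool next_block,
				      s64 relative_ts_us)
{
	if (!same_file || relative_ts_us >= 1ll << 32)
		return sizeof(struct full_record);		/* 28 bytes */
	if (!next_block || relative_ts_us >= 1 << 30)
		return sizeof(struct same_file_record);		/* 12 bytes */
	if (relative_ts_us >= 1 << 14)
		return sizeof(struct same_file_next_block);	/*  4 bytes */
	return sizeof(struct same_file_next_block_short);	/*  2 bytes */
}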

@ -94,6 +94,7 @@ static int append_zeros(struct backing_file_context *bfc, size_t len)
{
loff_t file_size = 0;
loff_t new_last_byte_offset = 0;
int res = 0;
if (!bfc)
return -EFAULT;
@ -110,18 +111,28 @@ static int append_zeros(struct backing_file_context *bfc, size_t len)
*/
file_size = incfs_get_end_offset(bfc->bc_file);
new_last_byte_offset = file_size + len - 1;
return vfs_fallocate(bfc->bc_file, 0, new_last_byte_offset, 1);
res = vfs_fallocate(bfc->bc_file, 0, new_last_byte_offset, 1);
if (res)
return res;
res = vfs_fsync_range(bfc->bc_file, file_size, file_size + len, 1);
return res;
}
static int write_to_bf(struct backing_file_context *bfc, const void *buf,
size_t count, loff_t pos)
size_t count, loff_t pos, bool sync)
{
ssize_t res = incfs_kwrite(bfc->bc_file, buf, count, pos);
ssize_t res = 0;
res = incfs_kwrite(bfc->bc_file, buf, count, pos);
if (res < 0)
return res;
if (res != count)
return -EIO;
if (sync)
return vfs_fsync_range(bfc->bc_file, pos, pos + count, 1);
return 0;
}
@ -175,7 +186,7 @@ static int append_md_to_backing_file(struct backing_file_context *bfc,
/* Write the metadata record to the end of the backing file */
record_offset = file_pos;
new_md_offset = cpu_to_le64(record_offset);
result = write_to_bf(bfc, record, record_size, file_pos);
result = write_to_bf(bfc, record, record_size, file_pos, true);
if (result)
return result;
@ -196,7 +207,7 @@ static int append_md_to_backing_file(struct backing_file_context *bfc,
fh_first_md_offset);
}
result = write_to_bf(bfc, &new_md_offset, sizeof(new_md_offset),
file_pos);
file_pos, true);
if (result)
return result;
@ -204,14 +215,15 @@ static int append_md_to_backing_file(struct backing_file_context *bfc,
return result;
}
int incfs_write_file_header_flags(struct backing_file_context *bfc, u32 flags)
int incfs_update_file_header_flags(struct backing_file_context *bfc, u32 flags)
{
if (!bfc)
return -EFAULT;
return write_to_bf(bfc, &flags, sizeof(flags),
offsetof(struct incfs_file_header,
fh_file_header_flags));
fh_file_header_flags),
false);
}
/*
@ -280,7 +292,7 @@ int incfs_write_file_attr_to_backing_file(struct backing_file_context *bfc,
file_attr.fa_offset = cpu_to_le64(value_offset);
file_attr.fa_crc = cpu_to_le32(crc);
result = write_to_bf(bfc, value.data, value.len, value_offset);
result = write_to_bf(bfc, value.data, value.len, value_offset, true);
if (result)
return result;
@ -320,7 +332,7 @@ int incfs_write_signature_to_backing_file(struct backing_file_context *bfc,
sg.sg_sig_size = cpu_to_le32(sig.len);
sg.sg_sig_offset = cpu_to_le64(pos);
result = write_to_bf(bfc, sig.data, sig.len, pos);
result = write_to_bf(bfc, sig.data, sig.len, pos, false);
if (result)
goto err;
}
@ -353,9 +365,10 @@ int incfs_write_signature_to_backing_file(struct backing_file_context *bfc,
/* Write a hash tree metadata record pointing to the hash tree above. */
result = append_md_to_backing_file(bfc, &sg.sg_header);
err:
if (result)
if (result) {
/* Error, rollback file changes */
truncate_backing_file(bfc, rollback_pos);
}
return result;
}
@ -389,7 +402,7 @@ int incfs_write_fh_to_backing_file(struct backing_file_context *bfc,
if (file_pos != 0)
return -EEXIST;
return write_to_bf(bfc, &fh, sizeof(fh), file_pos);
return write_to_bf(bfc, &fh, sizeof(fh), file_pos, true);
}
/* Write a given data block and update file's blockmap to point it. */
@ -418,7 +431,7 @@ int incfs_write_data_block_to_backing_file(struct backing_file_context *bfc,
}
/* Write the block data at the end of the backing file. */
result = write_to_bf(bfc, block.data, block.len, data_offset);
result = write_to_bf(bfc, block.data, block.len, data_offset, false);
if (result)
return result;
@ -428,16 +441,16 @@ int incfs_write_data_block_to_backing_file(struct backing_file_context *bfc,
bm_entry.me_data_size = cpu_to_le16((u16)block.len);
bm_entry.me_flags = cpu_to_le16(flags);
return write_to_bf(bfc, &bm_entry, sizeof(bm_entry),
bm_entry_off);
result = write_to_bf(bfc, &bm_entry, sizeof(bm_entry),
bm_entry_off, false);
return result;
}
int incfs_write_hash_block_to_backing_file(struct backing_file_context *bfc,
struct mem_range block,
int block_index,
loff_t hash_area_off,
loff_t bm_base_off,
loff_t file_size)
loff_t bm_base_off, int file_size)
{
struct incfs_blockmap_entry bm_entry = {};
int result;
@ -460,7 +473,7 @@ int incfs_write_hash_block_to_backing_file(struct backing_file_context *bfc,
return -EINVAL;
}
result = write_to_bf(bfc, block.data, block.len, data_offset);
result = write_to_bf(bfc, block.data, block.len, data_offset, false);
if (result)
return result;
@ -469,7 +482,8 @@ int incfs_write_hash_block_to_backing_file(struct backing_file_context *bfc,
bm_entry.me_data_size = cpu_to_le16(INCFS_DATA_FILE_BLOCK_SIZE);
bm_entry.me_flags = cpu_to_le16(INCFS_BLOCK_HASH);
return write_to_bf(bfc, &bm_entry, sizeof(bm_entry), bm_entry_off);
return write_to_bf(bfc, &bm_entry, sizeof(bm_entry), bm_entry_off,
false);
}
/* Initialize a new image in a given backing file. */
@ -499,19 +513,8 @@ int incfs_read_blockmap_entry(struct backing_file_context *bfc, int block_index,
loff_t bm_base_off,
struct incfs_blockmap_entry *bm_entry)
{
int error = incfs_read_blockmap_entries(bfc, bm_entry, block_index, 1,
bm_base_off);
if (error < 0)
return error;
if (error == 0)
return -EIO;
if (error != 1)
return -EFAULT;
return 0;
return incfs_read_blockmap_entries(bfc, bm_entry, block_index, 1,
bm_base_off);
}
int incfs_read_blockmap_entries(struct backing_file_context *bfc,
@ -535,7 +538,9 @@ int incfs_read_blockmap_entries(struct backing_file_context *bfc,
bm_entry_off);
if (result < 0)
return result;
return result / sizeof(*entries);
if (result < bytes_to_read)
return -EIO;
return 0;
}
int incfs_read_file_header(struct backing_file_context *bfc,

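One side of this hunk routes every backing-file write through a helper that can follow the write with a ranged fsync. A condensed sketch of that pattern is below, substituting the stock kernel_write() for the incfs_kwrite() wrapper used in the hunk.

/*
 * Sketch only; not a drop-in replacement for write_to_bf().
 */
static int sketch_write_to_bf(struct file *f, const void *buf, size_t count,
			      loff_t pos, bool sync)
{
	loff_t off = pos;
	ssize_t res = kernel_write(f, buf, count, &off);

	if (res < 0)
		return res;
	if (res != count)
		return -EIO;

	/* Data-integrity sync of just the byte range that was written. */
	if (sync)
		return vfs_fsync_range(f, pos, pos + count, 1);
	return 0;
}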

@ -303,8 +303,7 @@ int incfs_write_hash_block_to_backing_file(struct backing_file_context *bfc,
struct mem_range block,
int block_index,
loff_t hash_area_off,
loff_t bm_base_off,
loff_t file_size);
loff_t bm_base_off, int file_size);
int incfs_write_file_attr_to_backing_file(struct backing_file_context *bfc,
struct mem_range value, struct incfs_file_attr *attr);
@ -312,7 +311,7 @@ int incfs_write_file_attr_to_backing_file(struct backing_file_context *bfc,
int incfs_write_signature_to_backing_file(struct backing_file_context *bfc,
struct mem_range sig, u32 tree_size);
int incfs_write_file_header_flags(struct backing_file_context *bfc, u32 flags);
int incfs_update_file_header_flags(struct backing_file_context *bfc, u32 flags);
int incfs_make_empty_backing_file(struct backing_file_context *bfc,
incfs_uuid_t *uuid, u64 file_size);


@ -62,7 +62,7 @@ static bool read_u32(u8 **p, u8 *top, u32 *result)
if (*p + sizeof(u32) > top)
return false;
*result = le32_to_cpu(*(__le32 *)*p);
*result = le32_to_cpu(*(u32 *)*p);
*p += sizeof(u32);
return true;
}


@ -68,7 +68,6 @@ static struct inode *alloc_inode(struct super_block *sb);
static void free_inode(struct inode *inode);
static void evict_inode(struct inode *inode);
static int incfs_setattr(struct dentry *dentry, struct iattr *ia);
static ssize_t incfs_getxattr(struct dentry *d, const char *name,
void *value, size_t size);
static ssize_t incfs_setxattr(struct dentry *d, const char *name,
@ -99,8 +98,7 @@ static const struct inode_operations incfs_dir_inode_ops = {
.rename = dir_rename_wrap,
.unlink = dir_unlink,
.link = dir_link,
.rmdir = dir_rmdir,
.setattr = incfs_setattr,
.rmdir = dir_rmdir
};
static const struct file_operations incfs_dir_fops = {
@ -160,7 +158,7 @@ static const struct file_operations incfs_log_file_ops = {
};
static const struct inode_operations incfs_file_inode_ops = {
.setattr = incfs_setattr,
.setattr = simple_setattr,
.getattr = simple_getattr,
.listxattr = incfs_listxattr
};
@ -206,8 +204,6 @@ struct inode_search {
unsigned long ino;
struct dentry *backing_dentry;
size_t size;
};
enum parse_parameter {
@ -366,14 +362,13 @@ static int inode_set(struct inode *inode, void *opaque)
fsstack_copy_attr_all(inode, backing_inode);
if (S_ISREG(inode->i_mode)) {
u64 size = search->size;
u64 size = read_size_attr(backing_dentry);
inode->i_size = size;
inode->i_blocks = get_blocks_count_for_size(size);
inode->i_mapping->a_ops = &incfs_address_space_ops;
inode->i_op = &incfs_file_inode_ops;
inode->i_fop = &incfs_file_ops;
inode->i_mode &= ~0222;
} else if (S_ISDIR(inode->i_mode)) {
inode->i_size = 0;
inode->i_blocks = 1;
@ -442,8 +437,7 @@ static struct inode *fetch_regular_inode(struct super_block *sb,
struct inode *backing_inode = d_inode(backing_dentry);
struct inode_search search = {
.ino = backing_inode->i_ino,
.backing_dentry = backing_dentry,
.size = read_size_attr(backing_dentry),
.backing_dentry = backing_dentry
};
struct inode *inode = iget5_locked(sb, search.ino, inode_test,
inode_set, &search);
@ -587,27 +581,22 @@ static ssize_t log_read(struct file *f, char __user *buf, size_t len,
{
struct log_file_state *log_state = f->private_data;
struct mount_info *mi = get_mount_info(file_superblock(f));
struct incfs_pending_read_info *reads_buf =
(struct incfs_pending_read_info *)__get_free_page(GFP_NOFS);
size_t reads_to_collect = len / sizeof(*reads_buf);
size_t reads_per_page = PAGE_SIZE / sizeof(*reads_buf);
int total_reads_collected = 0;
int rl_size;
ssize_t result = 0;
struct incfs_pending_read_info *reads_buf;
ssize_t reads_to_collect = len / sizeof(*reads_buf);
ssize_t reads_per_page = PAGE_SIZE / sizeof(*reads_buf);
rl_size = READ_ONCE(mi->mi_log.rl_size);
if (rl_size == 0)
return 0;
reads_buf = (struct incfs_pending_read_info *)__get_free_page(GFP_NOFS);
if (!reads_buf)
return -ENOMEM;
reads_to_collect = min_t(ssize_t, rl_size, reads_to_collect);
reads_to_collect = min_t(size_t, mi->mi_log.rl_size, reads_to_collect);
while (reads_to_collect > 0) {
struct read_log_state next_state = READ_ONCE(log_state->state);
int reads_collected = incfs_collect_logged_reads(
mi, &next_state, reads_buf,
min_t(ssize_t, reads_to_collect, reads_per_page));
min_t(size_t, reads_to_collect, reads_per_page));
if (reads_collected <= 0) {
result = total_reads_collected ?
total_reads_collected *
@ -646,7 +635,7 @@ static __poll_t log_poll(struct file *file, poll_table *wait)
__poll_t ret = 0;
poll_wait(file, &mi->mi_log.ml_notif_wq, wait);
count = incfs_get_uncollected_logs_count(mi, &log_state->state);
count = incfs_get_uncollected_logs_count(mi, log_state->state);
if (count >= mi->mi_options.read_log_wakeup_count)
ret = EPOLLIN | EPOLLRDNORM;
@ -864,7 +853,7 @@ static struct mem_range incfs_copy_signature_info_from_user(u8 __user *original,
if (size > INCFS_MAX_SIGNATURE_SIZE)
return range(ERR_PTR(-EFAULT), 0);
result = kzalloc(size, GFP_NOFS | __GFP_COMP);
result = kzalloc(size, GFP_NOFS);
if (!result)
return range(ERR_PTR(-ENOMEM), 0);
@ -896,8 +885,7 @@ static int init_new_file(struct mount_info *mi, struct dentry *dentry,
.mnt = mi->mi_backing_dir_path.mnt,
.dentry = dentry
};
new_file = dentry_open(&path, O_RDWR | O_NOATIME | O_LARGEFILE,
mi->mi_owner);
new_file = dentry_open(&path, O_RDWR | O_NOATIME, mi->mi_owner);
if (IS_ERR(new_file)) {
error = PTR_ERR(new_file);
@ -1286,7 +1274,7 @@ static long ioctl_fill_blocks(struct file *f, void __user *arg)
{
struct incfs_fill_blocks __user *usr_fill_blocks = arg;
struct incfs_fill_blocks fill_blocks;
struct incfs_fill_block __user *usr_fill_block_array;
struct incfs_fill_block *usr_fill_block_array;
struct data_file *df = get_incfs_data_file(f);
const ssize_t data_buf_size = 2 * INCFS_DATA_FILE_BLOCK_SIZE;
u8 *data_buf = NULL;
@ -1303,8 +1291,7 @@ static long ioctl_fill_blocks(struct file *f, void __user *arg)
return -EFAULT;
usr_fill_block_array = u64_to_user_ptr(fill_blocks.fill_blocks);
data_buf = (u8 *)__get_free_pages(GFP_NOFS | __GFP_COMP,
get_order(data_buf_size));
data_buf = (u8 *)__get_free_pages(GFP_NOFS, get_order(data_buf_size));
if (!data_buf)
return -ENOMEM;
@ -1357,7 +1344,7 @@ static long ioctl_permit_fill(struct file *f, void __user *arg)
struct incfs_permit_fill __user *usr_permit_fill = arg;
struct incfs_permit_fill permit_fill;
long error = 0;
struct file *file = NULL;
struct file *file = 0;
if (f->f_op != &incfs_pending_read_file_ops)
return -EPERM;
@ -1419,7 +1406,7 @@ static long ioctl_read_file_signature(struct file *f, void __user *arg)
if (sig_buf_size > INCFS_MAX_SIGNATURE_SIZE)
return -E2BIG;
sig_buffer = kzalloc(sig_buf_size, GFP_NOFS | __GFP_COMP);
sig_buffer = kzalloc(sig_buf_size, GFP_NOFS);
if (!sig_buffer)
return -ENOMEM;
@ -1457,9 +1444,6 @@ static long ioctl_get_filled_blocks(struct file *f, void __user *arg)
if (!df)
return -EINVAL;
if ((uintptr_t)f->private_data != CAN_FILL)
return -EPERM;
if (copy_from_user(&args, args_usr_ptr, sizeof(args)) > 0)
return -EINVAL;
@ -1905,8 +1889,8 @@ static int file_open(struct inode *inode, struct file *file)
int err = 0;
get_incfs_backing_path(file->f_path.dentry, &backing_path);
backing_file = dentry_open(
&backing_path, O_RDWR | O_NOATIME | O_LARGEFILE, mi->mi_owner);
backing_file = dentry_open(&backing_path, O_RDWR | O_NOATIME,
mi->mi_owner);
path_put(&backing_path);
if (IS_ERR(backing_file)) {
@ -2036,45 +2020,6 @@ static void evict_inode(struct inode *inode)
clear_inode(inode);
}
static int incfs_setattr(struct dentry *dentry, struct iattr *ia)
{
struct dentry_info *di = get_incfs_dentry(dentry);
struct dentry *backing_dentry;
struct inode *backing_inode;
int error;
if (ia->ia_valid & ATTR_SIZE)
return -EINVAL;
if (!di)
return -EINVAL;
backing_dentry = di->backing_path.dentry;
if (!backing_dentry)
return -EINVAL;
backing_inode = d_inode(backing_dentry);
/* incfs files are readonly, but the backing files must be writeable */
if (S_ISREG(backing_inode->i_mode)) {
if ((ia->ia_valid & ATTR_MODE) && (ia->ia_mode & 0222))
return -EINVAL;
ia->ia_mode |= 0222;
}
inode_lock(d_inode(backing_dentry));
error = notify_change(backing_dentry, ia, NULL);
inode_unlock(d_inode(backing_dentry));
if (error)
return error;
if (S_ISREG(backing_inode->i_mode))
ia->ia_mode &= ~0222;
return simple_setattr(dentry, ia);
}
static ssize_t incfs_getxattr(struct dentry *d, const char *name,
void *value, size_t size)
{
@ -2267,9 +2212,10 @@ static int incfs_remount_fs(struct super_block *sb, int *flags, char *data)
if (err)
return err;
err = incfs_realloc_mount_info(mi, &options);
if (err)
return err;
if (mi->mi_options.read_timeout_ms != options.read_timeout_ms) {
mi->mi_options.read_timeout_ms = options.read_timeout_ms;
pr_debug("incfs: new timeout_ms=%d", options.read_timeout_ms);
}
pr_debug("incfs: remount\n");
return 0;


@ -30,12 +30,6 @@
struct cpufreq_policy;
typedef int (*plat_mitig_t)(int cpu, u32 clip_freq);
struct cpu_cooling_ops {
plat_mitig_t ceil_limit, floor_limit;
};
#ifdef CONFIG_CPU_THERMAL
/**
* cpufreq_cooling_register - function to create cpufreq cooling device.
@ -44,10 +38,6 @@ struct cpu_cooling_ops {
struct thermal_cooling_device *
cpufreq_cooling_register(struct cpufreq_policy *policy);
struct thermal_cooling_device *
cpufreq_platform_cooling_register(struct cpufreq_policy *policy,
struct cpu_cooling_ops *ops);
/**
* cpufreq_cooling_unregister - function to remove cpufreq cooling device.
* @cdev: thermal cooling device pointer.
@ -61,13 +51,6 @@ cpufreq_cooling_register(struct cpufreq_policy *policy)
return ERR_PTR(-ENOSYS);
}
static inline struct thermal_cooling_device *
cpufreq_platform_cooling_register(struct cpufreq_policy *policy,
struct cpu_cooling_ops *ops)
{
return NULL;
}
static inline
void cpufreq_cooling_unregister(struct thermal_cooling_device *cdev)
{


@ -125,19 +125,14 @@
* @type: integer (intval)
* @value: 0 (USB/USB2) or 1 (USB3)
* @default: 0 (USB/USB2)
* - EXTCON_PROP_USB_TYPEC_MED_HIGH_CURRENT
* @type: integer (intval)
* @value: 0 (default current), 1 (medium or high current)
* @default: 0 (default current)
*
*/
#define EXTCON_PROP_USB_VBUS 0
#define EXTCON_PROP_USB_TYPEC_POLARITY 1
#define EXTCON_PROP_USB_SS 2
#define EXTCON_PROP_USB_TYPEC_MED_HIGH_CURRENT 3
#define EXTCON_PROP_USB_MIN 0
#define EXTCON_PROP_USB_MAX 3
#define EXTCON_PROP_USB_MAX 2
#define EXTCON_PROP_USB_CNT (EXTCON_PROP_USB_MAX - EXTCON_PROP_USB_MIN + 1)
/* Properties of EXTCON_TYPE_CHG. */


@ -2245,6 +2245,7 @@ extern void zone_pcp_reset(struct zone *zone);
/* page_alloc.c */
extern int min_free_kbytes;
extern int watermark_boost_factor;
extern int watermark_scale_factor;
/* nommu.c */


@ -277,9 +277,10 @@ enum zone_watermarks {
NR_WMARK
};
#define min_wmark_pages(z) (z->watermark[WMARK_MIN])
#define low_wmark_pages(z) (z->watermark[WMARK_LOW])
#define high_wmark_pages(z) (z->watermark[WMARK_HIGH])
#define min_wmark_pages(z) (z->_watermark[WMARK_MIN] + z->watermark_boost)
#define low_wmark_pages(z) (z->_watermark[WMARK_LOW] + z->watermark_boost)
#define high_wmark_pages(z) (z->_watermark[WMARK_HIGH] + z->watermark_boost)
#define wmark_pages(z, i) (z->_watermark[i] + z->watermark_boost)
struct per_cpu_pages {
int count; /* number of pages in the list */
@ -370,7 +371,8 @@ struct zone {
/* Read-mostly fields */
/* zone watermarks, access with *_wmark_pages(zone) macros */
unsigned long watermark[NR_WMARK];
unsigned long _watermark[NR_WMARK];
unsigned long watermark_boost;
unsigned long nr_reserved_highatomic;
@ -496,6 +498,8 @@ struct zone {
unsigned long compact_cached_free_pfn;
/* pfn where async and sync compaction migration scanner should start */
unsigned long compact_cached_migrate_pfn[2];
unsigned long compact_init_migrate_pfn;
unsigned long compact_init_free_pfn;
#endif
#ifdef CONFIG_COMPACTION
@ -903,6 +907,8 @@ static inline int is_highmem(struct zone *zone)
struct ctl_table;
int min_free_kbytes_sysctl_handler(struct ctl_table *, int,
void __user *, size_t *, loff_t *);
int watermark_boost_factor_sysctl_handler(struct ctl_table *, int,
void __user *, size_t *, loff_t *);
int watermark_scale_factor_sysctl_handler(struct ctl_table *, int,
void __user *, size_t *, loff_t *);
extern int sysctl_lowmem_reserve_ratio[MAX_NR_ZONES];
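The watermark accessors at the top of this hunk are what allocation-side checks consume. A minimal sketch of such a check; zone_page_state() and the macros are real, the wrapper function is invented for illustration.

/*
 * Sketch only, not part of the patch.
 */
static bool sketch_zone_above_min(struct zone *z)
{
	unsigned long free_pages = zone_page_state(z, NR_FREE_PAGES);

	/*
	 * With the boosted form, min_wmark_pages() already folds
	 * z->watermark_boost into the threshold, so callers never read
	 * z->_watermark[] directly.
	 */
	return free_pages > min_wmark_pages(z);
}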


@ -352,7 +352,6 @@ enum power_supply_property {
POWER_SUPPLY_PROP_COMP_CLAMP_LEVEL,
POWER_SUPPLY_PROP_ADAPTER_CC_MODE,
POWER_SUPPLY_PROP_SKIN_HEALTH,
POWER_SUPPLY_PROP_CHARGE_DISABLE,
POWER_SUPPLY_PROP_ADAPTER_DETAILS,
POWER_SUPPLY_PROP_DEAD_BATTERY,
POWER_SUPPLY_PROP_VOLTAGE_FIFO,
@ -360,7 +359,6 @@ enum power_supply_property {
POWER_SUPPLY_PROP_OPERATING_FREQ,
POWER_SUPPLY_PROP_AICL_DELAY,
POWER_SUPPLY_PROP_AICL_ICL,
POWER_SUPPLY_PROP_RTX,
POWER_SUPPLY_PROP_CUTOFF_SOC,
POWER_SUPPLY_PROP_SYS_SOC,
POWER_SUPPLY_PROP_BATT_SOC,
@ -389,8 +387,6 @@ enum power_supply_property {
POWER_SUPPLY_PROP_IRQ_STATUS,
POWER_SUPPLY_PROP_PARALLEL_OUTPUT_MODE,
POWER_SUPPLY_PROP_ALIGNMENT,
POWER_SUPPLY_PROP_MOISTURE_DETECTION_ENABLE,
POWER_SUPPLY_PROP_FG_TYPE,
/* Local extensions of type int64_t */
POWER_SUPPLY_PROP_CHARGE_COUNTER_EXT,
POWER_SUPPLY_PROP_CHARGE_CHARGER_STATE,


@ -487,7 +487,6 @@ devm_regulator_register(struct device *dev,
const struct regulator_desc *regulator_desc,
const struct regulator_config *config);
void regulator_unregister(struct regulator_dev *rdev);
void regulator_sync_state(struct device *dev);
void devm_regulator_unregister(struct device *dev, struct regulator_dev *rdev);
int regulator_notifier_call_chain(struct regulator_dev *rdev,


@ -1,33 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (c) 2013, The Linux Foundation. All rights reserved.
*/
#ifndef _LINUX_REGULATOR_PROXY_CONSUMER_H_
#define _LINUX_REGULATOR_PROXY_CONSUMER_H_
#include <linux/device.h>
#include <linux/of.h>
struct proxy_consumer;
#ifdef CONFIG_REGULATOR_PROXY_CONSUMER
struct proxy_consumer *regulator_proxy_consumer_register(struct device *reg_dev,
struct device_node *reg_node);
int regulator_proxy_consumer_unregister(struct proxy_consumer *consumer);
#else
static inline struct proxy_consumer *regulator_proxy_consumer_register(
struct device *reg_dev, struct device_node *reg_node)
{ return NULL; }
static inline int regulator_proxy_consumer_unregister(
struct proxy_consumer *consumer)
{ return 0; }
#endif
#endif

View file

@ -29,7 +29,6 @@
#include <linux/mm_event.h>
#include <linux/task_io_accounting.h>
#include <linux/rseq.h>
#include <linux/android_kabi.h>
/* task_struct member predeclarations (sorted alphabetically): */
struct audit_context;
@ -487,11 +486,6 @@ struct sched_entity {
*/
struct sched_avg avg;
#endif
ANDROID_KABI_RESERVE(1);
ANDROID_KABI_RESERVE(2);
ANDROID_KABI_RESERVE(3);
ANDROID_KABI_RESERVE(4);
};
struct sched_rt_entity {
@ -510,11 +504,6 @@ struct sched_rt_entity {
/* rq "owned" by this entity/group: */
struct rt_rq *my_q;
#endif
ANDROID_KABI_RESERVE(1);
ANDROID_KABI_RESERVE(2);
ANDROID_KABI_RESERVE(3);
ANDROID_KABI_RESERVE(4);
} __randomize_layout;
struct sched_dl_entity {
@ -698,13 +687,6 @@ struct task_struct {
const struct sched_class *sched_class;
struct sched_entity se;
struct sched_rt_entity rt;
/* task boost vendor fields */
u64 last_sleep_ts;
int boost;
u64 boost_period;
u64 boost_expires;
#ifdef CONFIG_CGROUP_SCHED
struct task_group *sched_task_group;
#endif
@ -1293,15 +1275,6 @@ struct task_struct {
void *security;
#endif
ANDROID_KABI_RESERVE(1);
ANDROID_KABI_RESERVE(2);
ANDROID_KABI_RESERVE(3);
ANDROID_KABI_RESERVE(4);
ANDROID_KABI_RESERVE(5);
ANDROID_KABI_RESERVE(6);
ANDROID_KABI_RESERVE(7);
ANDROID_KABI_RESERVE(8);
/*
* New fields for task_struct should be added above here, so that
* they are included in the randomized portion of task_struct.

View file

@ -8,7 +8,6 @@
#include <linux/sched/jobctl.h>
#include <linux/sched/task.h>
#include <linux/cred.h>
#include <linux/android_kabi.h>
/*
* Types defining task->signal and task->sighand and APIs using them:
@ -233,10 +232,6 @@ struct signal_struct {
struct mutex cred_guard_mutex; /* guard against foreign influences on
* credential calculations
* (notably. ptrace) */
ANDROID_KABI_RESERVE(1);
ANDROID_KABI_RESERVE(2);
ANDROID_KABI_RESERVE(3);
ANDROID_KABI_RESERVE(4);
} __randomize_layout;
/*

View file

@ -3,7 +3,6 @@
#define _LINUX_SCHED_TOPOLOGY_H
#include <linux/topology.h>
#include <linux/android_kabi.h>
#include <linux/sched/idle.h>
@ -67,8 +66,6 @@ struct sched_domain_shared {
atomic_t ref;
atomic_t nr_busy_cpus;
int has_idle_cores;
bool overutilized;
};
struct sched_domain {
@ -144,10 +141,6 @@ struct sched_domain {
struct sched_domain_shared *shared;
unsigned int span_weight;
ANDROID_KABI_RESERVE(1);
ANDROID_KABI_RESERVE(2);
/*
* Span of all CPUs in this domain.
*

View file

@ -6,7 +6,6 @@
#include <linux/atomic.h>
#include <linux/refcount.h>
#include <linux/ratelimit.h>
#include <linux/android_kabi.h>
struct key;
@ -47,9 +46,6 @@ struct user_struct {
/* Miscellaneous per-user rate limit */
struct ratelimit_state ratelimit;
ANDROID_KABI_RESERVE(1);
ANDROID_KABI_RESERVE(2);
};
extern int uids_sysfs_init(void);

View file

@ -35,12 +35,6 @@
/* use value, which < 0K, to indicate an invalid/uninitialized temperature */
#define THERMAL_TEMP_INVALID -274000
/*
* use a high value for low temp tracking zone,
* to indicate an invalid/uninitialized temperature
*/
#define THERMAL_TEMP_INVALID_LOW 274000
/* Unit conversion macros */
#define DECI_KELVIN_TO_CELSIUS(t) ({ \
long _t = (t); \
@ -125,7 +119,6 @@ struct thermal_zone_device_ops {
enum thermal_trend *);
int (*notify) (struct thermal_zone_device *, int,
enum thermal_trip_type);
bool (*is_wakeable)(struct thermal_zone_device *);
int (*set_polling_delay)(struct thermal_zone_device *, int);
int (*set_passive_delay)(struct thermal_zone_device *, int);
};
@ -377,8 +370,6 @@ struct thermal_genl_event {
* temperature.
* @set_trip_temp: a pointer to a function that sets the trip temperature on
* hardware.
* @get_trip_temp: a pointer to a function that gets the trip temperature on
* hardware.
*/
struct thermal_zone_of_device_ops {
int (*get_temp)(void *, int *);
@ -386,7 +377,6 @@ struct thermal_zone_of_device_ops {
int (*set_trips)(void *, int, int);
int (*set_emul_temp)(void *, int);
int (*set_trip_temp)(void *, int, int);
int (*get_trip_temp)(void *, int, int *);
};
/**
@ -516,8 +506,6 @@ int thermal_zone_unbind_cooling_device(struct thermal_zone_device *, int,
struct thermal_cooling_device *);
void thermal_zone_device_update(struct thermal_zone_device *,
enum thermal_notify_event);
void thermal_zone_device_update_temp(struct thermal_zone_device *tz,
enum thermal_notify_event event, int temp);
void thermal_zone_set_trips(struct thermal_zone_device *);
struct thermal_cooling_device *thermal_cooling_device_register(const char *,
@ -572,10 +560,6 @@ static inline int thermal_zone_unbind_cooling_device(
static inline void thermal_zone_device_update(struct thermal_zone_device *tz,
enum thermal_notify_event event)
{ }
static inline void thermal_zone_device_update_temp(
struct thermal_zone_device *tz, enum thermal_notify_event event,
int temp)
{ }
static inline void thermal_zone_set_trips(struct thermal_zone_device *tz)
{ }
static inline struct thermal_cooling_device *

View file

@ -22,7 +22,6 @@
#include <linux/sched.h> /* for current && schedule_timeout */
#include <linux/mutex.h> /* for struct mutex */
#include <linux/pm_runtime.h> /* for runtime PM */
#include <linux/android_kabi.h>
struct usb_device;
struct usb_driver;
@ -258,11 +257,6 @@ struct usb_interface {
struct device dev; /* interface specific device info */
struct device *usb_dev;
struct work_struct reset_ws; /* for resets in atomic context */
ANDROID_KABI_RESERVE(1);
ANDROID_KABI_RESERVE(2);
ANDROID_KABI_RESERVE(3);
ANDROID_KABI_RESERVE(4);
};
#define to_usb_interface(d) container_of(d, struct usb_interface, dev)
@ -408,13 +402,6 @@ struct usb_host_bos {
struct usb_ssp_cap_descriptor *ssp_cap;
struct usb_ss_container_id_descriptor *ss_id;
struct usb_ptm_cap_descriptor *ptm_cap;
struct usb_config_summary_descriptor *config_summary;
unsigned int num_config_summary_desc;
ANDROID_KABI_RESERVE(1);
ANDROID_KABI_RESERVE(2);
ANDROID_KABI_RESERVE(3);
ANDROID_KABI_RESERVE(4);
};
int __usb_get_extra_descriptor(char *buffer, unsigned size,
@ -479,20 +466,6 @@ struct usb_bus {
struct mon_bus *mon_bus; /* non-null when associated */
int monitored; /* non-zero when monitored */
#endif
unsigned skip_resume:1; /* All USB devices are brought into full
* power state after system resume. It
* is desirable for some buses to keep
* their devices in suspend state even
* after system resume. The devices
* are resumed later when a remote
* wakeup is detected or an interface
* driver starts I/O.
*/
ANDROID_KABI_RESERVE(1);
ANDROID_KABI_RESERVE(2);
ANDROID_KABI_RESERVE(3);
ANDROID_KABI_RESERVE(4);
};
struct usb_dev_state;
@ -734,11 +707,6 @@ struct usb_device {
unsigned lpm_disable_count;
u16 hub_delay;
ANDROID_KABI_RESERVE(1);
ANDROID_KABI_RESERVE(2);
ANDROID_KABI_RESERVE(3);
ANDROID_KABI_RESERVE(4);
};
#define to_usb_device(d) container_of(d, struct usb_device, dev)
@ -856,20 +824,8 @@ static inline bool usb_device_no_sg_constraint(struct usb_device *udev)
/* for drivers using iso endpoints */
extern int usb_get_current_frame_number(struct usb_device *usb_dev);
extern int usb_sec_event_ring_setup(struct usb_device *dev,
unsigned int intr_num);
extern int usb_sec_event_ring_cleanup(struct usb_device *dev,
unsigned int intr_num);
extern phys_addr_t usb_get_sec_event_ring_phys_addr(
struct usb_device *dev, unsigned int intr_num, dma_addr_t *dma);
extern phys_addr_t usb_get_xfer_ring_phys_addr(struct usb_device *dev,
struct usb_host_endpoint *ep, dma_addr_t *dma);
extern int usb_get_controller_id(struct usb_device *dev);
extern int usb_stop_endpoint(struct usb_device *dev,
struct usb_host_endpoint *ep);
/* Sets up a group of bulk endpoints to support multiple stream IDs. */
extern int usb_alloc_streams(struct usb_interface *interface,
struct usb_host_endpoint **eps, unsigned int num_eps,
@ -1248,11 +1204,6 @@ struct usb_driver {
unsigned int supports_autosuspend:1;
unsigned int disable_hub_initiated_lpm:1;
unsigned int soft_unbind:1;
ANDROID_KABI_RESERVE(1);
ANDROID_KABI_RESERVE(2);
ANDROID_KABI_RESERVE(3);
ANDROID_KABI_RESERVE(4);
};
#define to_usb_driver(d) container_of(d, struct usb_driver, drvwrap.driver)
@ -1627,10 +1578,6 @@ struct urb {
usb_complete_t complete; /* (in) completion routine */
struct usb_iso_packet_descriptor iso_frame_desc[0];
/* (in) ISO ONLY */
ANDROID_KABI_RESERVE(1);
ANDROID_KABI_RESERVE(2);
ANDROID_KABI_RESERVE(3);
ANDROID_KABI_RESERVE(4);
};
/* ----------------------------------------------------------------------- */
@ -2044,11 +1991,8 @@ static inline int usb_translate_errors(int error_code)
#define USB_DEVICE_REMOVE 0x0002
#define USB_BUS_ADD 0x0003
#define USB_BUS_REMOVE 0x0004
#define USB_BUS_DIED 0x0005
extern void usb_register_notify(struct notifier_block *nb);
extern void usb_unregister_notify(struct notifier_block *nb);
extern void usb_register_atomic_notify(struct notifier_block *nb);
extern void usb_unregister_atomic_notify(struct notifier_block *nb);
/* debugfs stuff */
extern struct dentry *usb_debug_root;

View file

@ -73,7 +73,6 @@ struct usb_ep;
* Note that for writes (IN transfers) some data bytes may still
* reside in a device-side FIFO when the request is reported as
* complete.
* @udc_priv: Vendor private data in usage by the UDC.
*
* These are allocated/freed through the endpoint they're used with. The
* hardware's driver can add extra per-request data to the memory it returns,
@ -115,52 +114,6 @@ struct usb_request {
int status;
unsigned actual;
unsigned int udc_priv;
};
/*
* @buf_base_addr: Base pointer to buffer allocated for each GSI enabled EP.
* TRBs point to buffers that are split from this pool. The size of the
* buffer is num_bufs times buf_len. num_bufs and buf_len are determined
based on desired performance and aggregation size.
* @dma: DMA address corresponding to buf_base_addr.
* @num_bufs: Number of buffers associated with the GSI enabled EP. This
* corresponds to the number of non-zlp TRBs allocated for the EP.
* The value is determined based on desired performance for the EP.
* @buf_len: Size of each individual buffer is determined based on aggregation
* negotiated as per the protocol. In case of no aggregation supported by
* the protocol, we use default values.
* @db_reg_phs_addr_lsb: IPA channel doorbell register's physical address LSB
* @mapped_db_reg_phs_addr_lsb: doorbell LSB IOVA address mapped with IOMMU
* @db_reg_phs_addr_msb: IPA channel doorbell register's physical address MSB
*/
struct usb_gsi_request {
void *buf_base_addr;
dma_addr_t dma;
size_t num_bufs;
size_t buf_len;
u32 db_reg_phs_addr_lsb;
dma_addr_t mapped_db_reg_phs_addr_lsb;
u32 db_reg_phs_addr_msb;
struct sg_table sgt_trb_xfer_ring;
struct sg_table sgt_data_buff;
};
enum gsi_ep_op {
GSI_EP_OP_CONFIG = 0,
GSI_EP_OP_STARTXFER,
GSI_EP_OP_STORE_DBL_INFO,
GSI_EP_OP_ENABLE_GSI,
GSI_EP_OP_UPDATEXFER,
GSI_EP_OP_RING_DB,
GSI_EP_OP_ENDXFER,
GSI_EP_OP_GET_CH_INFO,
GSI_EP_OP_GET_XFER_IDX,
GSI_EP_OP_PREPARE_TRBS,
GSI_EP_OP_FREE_TRBS,
GSI_EP_OP_SET_CLR_BLOCK_DBL,
GSI_EP_OP_CHECK_FOR_SUSPEND,
GSI_EP_OP_DISABLE,
};
/*-------------------------------------------------------------------------*/
@ -191,9 +144,6 @@ struct usb_ep_ops {
int (*fifo_status) (struct usb_ep *ep);
void (*fifo_flush) (struct usb_ep *ep);
int (*gsi_ep_op) (struct usb_ep *ep, void *op_data,
enum gsi_ep_op op);
};
/**
@ -313,8 +263,6 @@ int usb_ep_clear_halt(struct usb_ep *ep);
int usb_ep_set_wedge(struct usb_ep *ep);
int usb_ep_fifo_status(struct usb_ep *ep);
void usb_ep_fifo_flush(struct usb_ep *ep);
int usb_gsi_ep_op(struct usb_ep *ep,
struct usb_gsi_request *req, enum gsi_ep_op op);
#else
static inline void usb_ep_set_maxpacket_limit(struct usb_ep *ep,
unsigned maxpacket_limit)
@ -344,10 +292,6 @@ static inline int usb_ep_fifo_status(struct usb_ep *ep)
{ return 0; }
static inline void usb_ep_fifo_flush(struct usb_ep *ep)
{ }
static inline int usb_gsi_ep_op(struct usb_ep *ep,
struct usb_gsi_request *req, enum gsi_ep_op op)
{ return 0; }
#endif /* USB_GADGET */
/*-------------------------------------------------------------------------*/
@ -385,7 +329,6 @@ struct usb_gadget_ops {
struct usb_ep *(*match_ep)(struct usb_gadget *,
struct usb_endpoint_descriptor *,
struct usb_ss_ep_comp_descriptor *);
int (*restart)(struct usb_gadget *g);
};
/**
@ -437,8 +380,6 @@ struct usb_gadget_ops {
* @connected: True if gadget is connected.
* @lpm_capable: If the gadget max_speed is FULL or HIGH, this flag
* indicates that it supports LPM as per the LPM ECN & errata.
* @remote_wakeup: Indicates if the host has enabled the remote_wakeup
* feature.
*
* Gadgets have a mostly-portable "gadget driver" implementing device
* functions, handling all usb configurations and interfaces. Gadget
@ -493,7 +434,6 @@ struct usb_gadget {
unsigned deactivated:1;
unsigned connected:1;
unsigned lpm_capable:1;
unsigned remote_wakeup:1;
};
#define work_to_gadget(w) (container_of((w), struct usb_gadget, work))
@ -642,8 +582,7 @@ static inline int usb_gadget_frame_number(struct usb_gadget *gadget)
{ return 0; }
static inline int usb_gadget_wakeup(struct usb_gadget *gadget)
{ return 0; }
static inline int usb_gadget_func_wakeup(struct usb_gadget *gadget,
int interface_id)
static int usb_gadget_func_wakeup(struct usb_gadget *gadget, int interface_id)
{ return 0; }
static inline int usb_gadget_set_selfpowered(struct usb_gadget *gadget)
{ return 0; }

View file

@ -25,7 +25,6 @@
#include <linux/rwsem.h>
#include <linux/interrupt.h>
#include <linux/idr.h>
#include <linux/android_kabi.h>
#define MAX_TOPO_LEVEL 6
@ -218,11 +217,6 @@ struct usb_hcd {
* (ohci 32, uhci 1024, ehci 256/512/1024).
*/
ANDROID_KABI_RESERVE(1);
ANDROID_KABI_RESERVE(2);
ANDROID_KABI_RESERVE(3);
ANDROID_KABI_RESERVE(4);
/* The HC driver's private data is stored at the end of
* this structure.
*/
@ -412,23 +406,8 @@ struct hc_driver {
int (*find_raw_port_number)(struct usb_hcd *, int);
/* Call for power on/off the port if necessary */
int (*port_power)(struct usb_hcd *hcd, int portnum, bool enable);
int (*get_core_id)(struct usb_hcd *hcd);
int (*sec_event_ring_setup)(struct usb_hcd *hcd, unsigned int intr_num);
int (*sec_event_ring_cleanup)(struct usb_hcd *hcd,
unsigned int intr_num);
phys_addr_t (*get_sec_event_ring_phys_addr)(struct usb_hcd *hcd,
unsigned int intr_num, dma_addr_t *dma);
phys_addr_t (*get_xfer_ring_phys_addr)(struct usb_hcd *hcd,
struct usb_device *udev, struct usb_host_endpoint *ep,
dma_addr_t *dma);
int (*get_core_id)(struct usb_hcd *hcd);
int (*stop_endpoint)(struct usb_hcd *hcd, struct usb_device *udev,
struct usb_host_endpoint *ep);
ANDROID_KABI_RESERVE(1);
ANDROID_KABI_RESERVE(2);
ANDROID_KABI_RESERVE(3);
ANDROID_KABI_RESERVE(4);
};
static inline int hcd_giveback_urb_in_bh(struct usb_hcd *hcd)
@ -467,17 +446,7 @@ extern int usb_hcd_alloc_bandwidth(struct usb_device *udev,
struct usb_host_interface *old_alt,
struct usb_host_interface *new_alt);
extern int usb_hcd_get_frame_number(struct usb_device *udev);
extern int usb_hcd_sec_event_ring_setup(struct usb_device *udev,
unsigned int intr_num);
extern int usb_hcd_sec_event_ring_cleanup(struct usb_device *udev,
unsigned int intr_num);
extern phys_addr_t usb_hcd_get_sec_event_ring_phys_addr(
struct usb_device *udev, unsigned int intr_num, dma_addr_t *dma);
extern phys_addr_t usb_hcd_get_xfer_ring_phys_addr(
struct usb_device *udev, struct usb_host_endpoint *ep, dma_addr_t *dma);
extern int usb_hcd_get_controller_id(struct usb_device *udev);
extern int usb_hcd_stop_endpoint(struct usb_device *udev,
struct usb_host_endpoint *ep);
struct usb_hcd *__usb_create_hcd(const struct hc_driver *driver,
struct device *sysdev, struct device *dev, const char *bus_name,
@ -583,11 +552,6 @@ struct usb_tt {
spinlock_t lock;
struct list_head clear_list; /* of usb_tt_clear */
struct work_struct clear_work;
ANDROID_KABI_RESERVE(1);
ANDROID_KABI_RESERVE(2);
ANDROID_KABI_RESERVE(3);
ANDROID_KABI_RESERVE(4);
};
struct usb_tt_clear {

View file

@ -15,17 +15,6 @@
#include <linux/usb.h>
#include <uapi/linux/usb/charger.h>
#define ENABLE_DP_MANUAL_PULLUP BIT(0)
#define ENABLE_SECONDARY_PHY BIT(1)
#define PHY_HOST_MODE BIT(2)
#define PHY_CHARGER_CONNECTED BIT(3)
#define PHY_VBUS_VALID_OVERRIDE BIT(4)
#define DEVICE_IN_SS_MODE BIT(5)
#define PHY_LANE_A BIT(6)
#define PHY_LANE_B BIT(7)
#define PHY_HSFS_MODE BIT(8)
#define PHY_LS_MODE BIT(9)
enum usb_phy_interface {
USBPHY_INTERFACE_MODE_UNKNOWN,
USBPHY_INTERFACE_MODE_UTMI,
@ -48,8 +37,6 @@ enum usb_phy_type {
USB_PHY_TYPE_UNDEFINED,
USB_PHY_TYPE_USB2,
USB_PHY_TYPE_USB3,
USB_PHY_TYPE_USB3_OR_DP,
USB_PHY_TYPE_USB3_AND_DP,
};
/* OTG defines lots of enumeration states before device reset */
@ -60,7 +47,6 @@ enum usb_otg_state {
OTG_STATE_B_IDLE,
OTG_STATE_B_SRP_INIT,
OTG_STATE_B_PERIPHERAL,
OTG_STATE_B_SUSPEND,
/* extra dual-role default-b states */
OTG_STATE_B_WAIT_ACON,
@ -169,10 +155,6 @@ struct usb_phy {
* manually detect the charger type.
*/
enum usb_charger_type (*charger_detect)(struct usb_phy *x);
/* reset the PHY clocks */
int (*reset)(struct usb_phy *x);
int (*drive_dp_pulse)(struct usb_phy *x, unsigned int pulse_width);
};
/* for board-specific init logic */
@ -231,24 +213,6 @@ usb_phy_vbus_off(struct usb_phy *x)
return x->set_vbus(x, false);
}
static inline int
usb_phy_reset(struct usb_phy *x)
{
if (x && x->reset)
return x->reset(x);
return 0;
}
static inline int
usb_phy_drive_dp_pulse(struct usb_phy *x, unsigned int pulse_width)
{
if (x && x->drive_dp_pulse)
return x->drive_dp_pulse(x, pulse_width);
return 0;
}
/* for usb host and peripheral controller drivers */
#if IS_ENABLED(CONFIG_USB_PHY)
extern struct usb_phy *usb_get_phy(enum usb_phy_type type);

View file

@ -23,8 +23,6 @@
#ifndef __LINUX_USB_USBNET_H
#define __LINUX_USB_USBNET_H
#include <linux/android_kabi.h>
/* interface from usbnet core to each USB networking link we handle */
struct usbnet {
/* housekeeping */
@ -85,11 +83,6 @@ struct usbnet {
# define EVENT_LINK_CHANGE 11
# define EVENT_SET_RX_MODE 12
# define EVENT_NO_IP_ALIGN 13
ANDROID_KABI_RESERVE(1);
ANDROID_KABI_RESERVE(2);
ANDROID_KABI_RESERVE(3);
ANDROID_KABI_RESERVE(4);
};
static inline struct usb_driver *driver_of(struct usb_interface *intf)
@ -179,9 +172,6 @@ struct driver_info {
int out; /* tx endpoint */
unsigned long data; /* Misc driver specific data */
ANDROID_KABI_RESERVE(1);
ANDROID_KABI_RESERVE(2);
};
/* Minidrivers are just drivers using the "usbnet" core as a powerful

View file

@ -321,9 +321,6 @@ struct incfs_get_filled_blocks_args {
/* Actual number of blocks in file */
__u32 total_blocks_out;
/* The number of data blocks in file */
__u32 data_blocks_out;
/* Number of bytes written to range buffer */
__u32 range_buffer_size_out;

View file

@ -1081,26 +1081,6 @@ struct usb_ptm_cap_descriptor {
*/
#define USB_DT_USB_SSP_CAP_SIZE(ssac) (12 + (ssac + 1) * 4)
/*
* Configuration Summary descriptors: Defines a list of device preferred
* configurations. This descriptor may be used by Host software to decide
* which Configuration to use to obtain the desired functionality.
*/
#define USB_CAP_TYPE_CONFIG_SUMMARY 0x10
#define USB_CONFIG_SUMMARY_DESC_REV 0x100
struct usb_config_summary_descriptor {
__u8 bLength;
__u8 bDescriptorType;
__u8 bDevCapabilityType;
__u16 bcdVersion;
__u8 bClass;
__u8 bSubClass;
__u8 bProtocol;
__u8 bConfigurationCount;
__u8 bConfigurationIndex[];
} __attribute__((packed));
/*-------------------------------------------------------------------------*/
/* USB_DT_WIRELESS_ENDPOINT_COMP: companion descriptor associated with

View file

@ -19,7 +19,6 @@ endif
obj-y += core.o loadavg.o clock.o cputime.o
obj-y += idle.o fair.o rt.o deadline.o
obj-y += wait.o wait_bit.o swait.o completion.o
obj-y += stubs.o
obj-$(CONFIG_SMP) += cpupri.o cpudeadline.o topology.o stop_task.o pelt.o
obj-$(CONFIG_SCHED_AUTOGROUP) += autogroup.o

View file

@ -1,30 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Symbols stubs needed for GKI compliance
*/
#include "sched.h"
int sched_isolate_cpu(int cpu)
{
return -EINVAL;
}
EXPORT_SYMBOL_GPL(sched_isolate_cpu);
int sched_unisolate_cpu_unlocked(int cpu)
{
return -EINVAL;
}
EXPORT_SYMBOL_GPL(sched_unisolate_cpu_unlocked);
int sched_unisolate_cpu(int cpu)
{
return -EINVAL;
}
EXPORT_SYMBOL_GPL(sched_unisolate_cpu);
int set_task_boost(int boost, u64 period)
{
return -EINVAL;
}
EXPORT_SYMBOL_GPL(set_task_boost);

View file

@ -1494,6 +1494,14 @@ static struct ctl_table vm_table[] = {
.proc_handler = min_free_kbytes_sysctl_handler,
.extra1 = &zero,
},
{
.procname = "watermark_boost_factor",
.data = &watermark_boost_factor,
.maxlen = sizeof(watermark_boost_factor),
.mode = 0644,
.proc_handler = watermark_boost_factor_sysctl_handler,
.extra1 = &zero,
},
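Note (not part of the diff): with this sysctl entry restored, the knob is exposed as /proc/sys/vm/watermark_boost_factor (mode 0644). Writing 0 disables boosting altogether, since boost_watermark() returns early when watermark_boost_factor is zero; the restored default of 15000 corresponds to a maximum boost of 150% of the zone's high watermark.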
{
.procname = "watermark_scale_factor",
.data = &watermark_scale_factor,

View file

@ -1733,7 +1733,6 @@ int ptr_to_hashval(const void *ptr, unsigned long *hashval_out)
{
return __ptr_to_hashval(ptr, hashval_out);
}
EXPORT_SYMBOL_GPL(ptr_to_hashval);
/* Maps a pointer to a 32 bit unique identifier. */
static char *ptr_to_id(char *buf, char *end, void *ptr, struct printf_spec spec)

View file

@ -237,6 +237,70 @@ static bool pageblock_skip_persistent(struct page *page)
return false;
}
static bool
__reset_isolation_pfn(struct zone *zone, unsigned long pfn, bool check_source,
bool check_target)
{
struct page *page = pfn_to_online_page(pfn);
struct page *end_page;
unsigned long block_pfn;
if (!page)
return false;
if (zone != page_zone(page))
return false;
if (pageblock_skip_persistent(page))
return false;
/*
* If skip is already cleared do no further checking once the
* restart points have been set.
*/
if (check_source && check_target && !get_pageblock_skip(page))
return true;
/*
* If clearing skip for the target scanner, do not select a
* non-movable pageblock as the starting point.
*/
if (!check_source && check_target &&
get_pageblock_migratetype(page) != MIGRATE_MOVABLE)
return false;
/*
* Only clear the hint if a sample indicates there is either a
* free page or an LRU page in the block. One or other condition
* is necessary for the block to be a migration source/target.
*/
block_pfn = pageblock_start_pfn(pfn);
pfn = max(block_pfn, zone->zone_start_pfn);
page = pfn_to_page(pfn);
if (zone != page_zone(page))
return false;
pfn = block_pfn + pageblock_nr_pages;
pfn = min(pfn, zone_end_pfn(zone));
end_page = pfn_to_page(pfn);
do {
if (pfn_valid_within(pfn)) {
if (check_source && PageLRU(page)) {
clear_pageblock_skip(page);
return true;
}
if (check_target && PageBuddy(page)) {
clear_pageblock_skip(page);
return true;
}
}
page += (1 << PAGE_ALLOC_COSTLY_ORDER);
pfn += (1 << PAGE_ALLOC_COSTLY_ORDER);
} while (page < end_page);
return false;
}
/*
* This function is called to clear all cached information on pageblocks that
* should be skipped for page isolation when the migrate and free page scanner
@ -244,30 +308,54 @@ static bool pageblock_skip_persistent(struct page *page)
*/
static void __reset_isolation_suitable(struct zone *zone)
{
unsigned long start_pfn = zone->zone_start_pfn;
unsigned long end_pfn = zone_end_pfn(zone);
unsigned long pfn;
unsigned long migrate_pfn = zone->zone_start_pfn;
unsigned long free_pfn = zone_end_pfn(zone);
unsigned long reset_migrate = free_pfn;
unsigned long reset_free = migrate_pfn;
bool source_set = false;
bool free_set = false;
if (!zone->compact_blockskip_flush)
return;
zone->compact_blockskip_flush = false;
/* Walk the zone and mark every pageblock as suitable for isolation */
for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages) {
struct page *page;
/*
* Walk the zone and update pageblock skip information. Source looks
* for PageLRU while target looks for PageBuddy. When the scanner
* is found, both PageBuddy and PageLRU are checked as the pageblock
* is suitable as both source and target.
*/
for (; migrate_pfn < free_pfn; migrate_pfn += pageblock_nr_pages,
free_pfn -= pageblock_nr_pages) {
cond_resched();
page = pfn_to_online_page(pfn);
if (!page)
continue;
if (zone != page_zone(page))
continue;
if (pageblock_skip_persistent(page))
continue;
/* Update the migrate PFN */
if (__reset_isolation_pfn(zone, migrate_pfn, true, source_set) &&
migrate_pfn < reset_migrate) {
source_set = true;
reset_migrate = migrate_pfn;
zone->compact_init_migrate_pfn = reset_migrate;
zone->compact_cached_migrate_pfn[0] = reset_migrate;
zone->compact_cached_migrate_pfn[1] = reset_migrate;
}
clear_pageblock_skip(page);
/* Update the free PFN */
if (__reset_isolation_pfn(zone, free_pfn, free_set, true) &&
free_pfn > reset_free) {
free_set = true;
reset_free = free_pfn;
zone->compact_init_free_pfn = reset_free;
zone->compact_cached_free_pfn = reset_free;
}
}
reset_cached_positions(zone);
/* Leave no distance if no suitable block was reset */
if (reset_migrate >= reset_free) {
zone->compact_cached_migrate_pfn[0] = migrate_pfn;
zone->compact_cached_migrate_pfn[1] = migrate_pfn;
zone->compact_cached_free_pfn = free_pfn;
}
}
void reset_isolation_suitable(pg_data_t *pgdat)
@ -1431,7 +1519,7 @@ static enum compact_result __compaction_suitable(struct zone *zone, int order,
if (is_via_compact_memory(order))
return COMPACT_CONTINUE;
watermark = zone->watermark[alloc_flags & ALLOC_WMARK_MASK];
watermark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
/*
* If watermarks for high-order allocation are already met, there
* should be no need for compaction at all.
@ -1591,7 +1679,7 @@ static enum compact_result compact_zone(struct zone *zone, struct compact_contro
zone->compact_cached_migrate_pfn[1] = cc->migrate_pfn;
}
if (cc->migrate_pfn == start_pfn)
if (cc->migrate_pfn <= cc->zone->compact_init_migrate_pfn)
cc->whole_zone = true;
}

View file

@ -490,10 +490,16 @@ unsigned long reclaim_clean_pages_from_list(struct zone *zone,
#define ALLOC_OOM ALLOC_NO_WATERMARKS
#endif
#define ALLOC_HARDER 0x10 /* try to alloc harder */
#define ALLOC_HIGH 0x20 /* __GFP_HIGH set */
#define ALLOC_CPUSET 0x40 /* check for correct cpuset */
#define ALLOC_CMA 0x80 /* allow allocations from CMA areas */
#define ALLOC_HARDER 0x10 /* try to alloc harder */
#define ALLOC_HIGH 0x20 /* __GFP_HIGH set */
#define ALLOC_CPUSET 0x40 /* check for correct cpuset */
#define ALLOC_CMA 0x80 /* allow allocations from CMA areas */
#ifdef CONFIG_ZONE_DMA32
#define ALLOC_NOFRAGMENT 0x100 /* avoid mixing pageblock types */
#else
#define ALLOC_NOFRAGMENT 0x0
#endif
#define ALLOC_KSWAPD 0x200 /* allow waking of kswapd */
enum ttu_flags;
struct tlbflush_unmap_batch;

View file

@ -318,6 +318,7 @@ compound_page_dtor * const compound_page_dtors[] = {
*/
int min_free_kbytes = 1024;
int user_min_free_kbytes = -1;
int watermark_boost_factor __read_mostly = 15000;
int watermark_scale_factor = 10;
/*
@ -2219,6 +2220,21 @@ static bool can_steal_fallback(unsigned int order, int start_mt)
return false;
}
static inline void boost_watermark(struct zone *zone)
{
unsigned long max_boost;
if (!watermark_boost_factor)
return;
max_boost = mult_frac(zone->_watermark[WMARK_HIGH],
watermark_boost_factor, 10000);
max_boost = max(pageblock_nr_pages, max_boost);
zone->watermark_boost = min(zone->watermark_boost + pageblock_nr_pages,
max_boost);
}
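Note (not part of the diff): as a worked example of boost_watermark(), take the default watermark_boost_factor of 15000 and a zone whose _watermark[WMARK_HIGH] is 10000 pages: max_boost = mult_frac(10000, 15000, 10000) = 15000 pages, and the cap is never below pageblock_nr_pages. Each external fragmentation event then bumps zone->watermark_boost by pageblock_nr_pages until it is clamped at that cap, temporarily raising every watermark read through wmark_pages().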
/*
* This function implements actual steal behaviour. If order is large enough,
* we can steal whole pageblock. If not, we first move freepages in this
@ -2228,7 +2244,7 @@ static bool can_steal_fallback(unsigned int order, int start_mt)
* itself, so pages freed in the future will be put on the correct free list.
*/
static void steal_suitable_fallback(struct zone *zone, struct page *page,
int start_type, bool whole_block)
unsigned int alloc_flags, int start_type, bool whole_block)
{
unsigned int current_order = page_order(page);
struct free_area *area;
@ -2250,6 +2266,15 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
goto single_page;
}
/*
* Boost watermarks to increase reclaim pressure to reduce the
* likelihood of future fallbacks. Wake kswapd now as the node
* may be balanced overall and kswapd will not wake naturally.
*/
boost_watermark(zone);
if (alloc_flags & ALLOC_KSWAPD)
wakeup_kswapd(zone, 0, 0, zone_idx(zone));
/* We are not allowed to try stealing from the whole block */
if (!whole_block)
goto single_page;
@ -2465,20 +2490,30 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
* condition simpler.
*/
static __always_inline bool
__rmqueue_fallback(struct zone *zone, int order, int start_migratetype)
__rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
unsigned int alloc_flags)
{
struct free_area *area;
int current_order;
int min_order = order;
struct page *page;
int fallback_mt;
bool can_steal;
/*
* Do not steal pages from freelists belonging to other pageblocks
* i.e. orders < pageblock_order. If there are no local zones free,
* the zonelists will be reiterated without ALLOC_NOFRAGMENT.
*/
if (alloc_flags & ALLOC_NOFRAGMENT)
min_order = pageblock_order;
/*
* Find the largest available free page in the other list. This roughly
* approximates finding the pageblock with the most free pages, which
* would be too costly to do exactly.
*/
for (current_order = MAX_ORDER - 1; current_order >= order;
for (current_order = MAX_ORDER - 1; current_order >= min_order;
--current_order) {
area = &(zone->free_area[current_order]);
fallback_mt = find_suitable_fallback(area, current_order,
@ -2523,7 +2558,8 @@ do_steal:
page = list_first_entry(&area->free_list[fallback_mt],
struct page, lru);
steal_suitable_fallback(zone, page, start_migratetype, can_steal);
steal_suitable_fallback(zone, page, alloc_flags, start_migratetype,
can_steal);
trace_mm_page_alloc_extfrag(page, order, current_order,
start_migratetype, fallback_mt);
@ -2537,14 +2573,16 @@ do_steal:
* Call me with the zone->lock already held.
*/
static __always_inline struct page *
__rmqueue(struct zone *zone, unsigned int order, int migratetype)
__rmqueue(struct zone *zone, unsigned int order, int migratetype,
unsigned int alloc_flags)
{
struct page *page;
retry:
page = __rmqueue_smallest(zone, order, migratetype);
if (unlikely(!page) && __rmqueue_fallback(zone, order, migratetype))
if (unlikely(!page) && __rmqueue_fallback(zone, order, migratetype,
alloc_flags))
goto retry;
trace_mm_page_alloc_zone_locked(page, order, migratetype);
@ -2576,7 +2614,7 @@ static inline struct page *__rmqueue_cma(struct zone *zone, unsigned int order)
*/
static int rmqueue_bulk(struct zone *zone, unsigned int order,
unsigned long count, struct list_head *list,
int migratetype)
int migratetype, unsigned int alloc_flags)
{
int i, alloced = 0;
@ -2592,7 +2630,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
if (is_migrate_cma(migratetype))
page = __rmqueue_cma(zone, order);
else
page = __rmqueue(zone, order, migratetype);
page = __rmqueue(zone, order, migratetype, alloc_flags);
if (unlikely(page == NULL))
break;
@ -2635,14 +2673,14 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
*/
static struct list_head *get_populated_pcp_list(struct zone *zone,
unsigned int order, struct per_cpu_pages *pcp,
int migratetype)
int migratetype, unsigned int alloc_flags)
{
struct list_head *list = &pcp->lists[migratetype];
if (list_empty(list)) {
pcp->count += rmqueue_bulk(zone, order,
pcp->batch, list,
migratetype);
migratetype, alloc_flags);
if (list_empty(list))
list = NULL;
@ -3071,6 +3109,7 @@ static inline void zone_statistics(struct zone *preferred_zone, struct zone *z)
/* Remove page from the per-cpu list, caller must protect the list */
static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
unsigned int alloc_flags,
struct per_cpu_pages *pcp,
gfp_t gfp_flags)
{
@ -3082,7 +3121,7 @@ static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
if (migratetype == MIGRATE_MOVABLE &&
gfp_flags & __GFP_CMA) {
list = get_populated_pcp_list(zone, 0, pcp,
get_cma_migrate_type());
get_cma_migrate_type(), alloc_flags);
}
if (list == NULL) {
@ -3091,7 +3130,7 @@ static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
* free CMA pages.
*/
list = get_populated_pcp_list(zone, 0, pcp,
migratetype);
migratetype, alloc_flags);
if (unlikely(list == NULL) ||
unlikely(list_empty(list)))
return NULL;
@ -3108,7 +3147,8 @@ static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
/* Lock and remove page from the per-cpu list */
static struct page *rmqueue_pcplist(struct zone *preferred_zone,
struct zone *zone, unsigned int order,
gfp_t gfp_flags, int migratetype)
gfp_t gfp_flags, int migratetype,
unsigned int alloc_flags)
{
struct per_cpu_pages *pcp;
struct page *page;
@ -3116,7 +3156,7 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
local_irq_save(flags);
pcp = &this_cpu_ptr(zone->pageset)->pcp;
page = __rmqueue_pcplist(zone, migratetype, pcp,
page = __rmqueue_pcplist(zone, migratetype, alloc_flags, pcp,
gfp_flags);
if (page) {
__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
@ -3140,7 +3180,7 @@ struct page *rmqueue(struct zone *preferred_zone,
if (likely(order == 0)) {
page = rmqueue_pcplist(preferred_zone, zone, order,
gfp_flags, migratetype);
gfp_flags, migratetype, alloc_flags);
goto out;
}
@ -3165,7 +3205,7 @@ struct page *rmqueue(struct zone *preferred_zone,
page = __rmqueue_cma(zone, order);
if (!page)
page = __rmqueue(zone, order, migratetype);
page = __rmqueue(zone, order, migratetype, alloc_flags);
} while (page && check_new_pages(page, order));
spin_unlock(&zone->lock);
@ -3417,6 +3457,40 @@ static bool zone_allows_reclaim(struct zone *local_zone, struct zone *zone)
}
#endif /* CONFIG_NUMA */
/*
* The restriction on ZONE_DMA32 as being a suitable zone to use to avoid
* fragmentation is subtle. If the preferred zone was HIGHMEM then
* premature use of a lower zone may cause lowmem pressure problems that
* are worse than fragmentation. If the next zone is ZONE_DMA then it is
* probably too small. It only makes sense to spread allocations to avoid
* fragmentation between the Normal and DMA32 zones.
*/
static inline unsigned int
alloc_flags_nofragment(struct zone *zone, gfp_t gfp_mask)
{
unsigned int alloc_flags = 0;
if (gfp_mask & __GFP_KSWAPD_RECLAIM)
alloc_flags |= ALLOC_KSWAPD;
#ifdef CONFIG_ZONE_DMA32
if (zone_idx(zone) != ZONE_NORMAL)
goto out;
/*
* If ZONE_DMA32 exists, assume it is the one after ZONE_NORMAL and
* the pointer is within zone->zone_pgdat->node_zones[]. Also assume
* on UMA that if Normal is populated then so is DMA32.
*/
BUILD_BUG_ON(ZONE_NORMAL - ZONE_DMA32 != 1);
if (nr_online_nodes > 1 && !populated_zone(--zone))
goto out;
alloc_flags |= ALLOC_NOFRAGMENT;
out:
#endif /* CONFIG_ZONE_DMA32 */
return alloc_flags;
}
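Note (not part of the diff): the flag computed here ties the fragmentation-avoidance pieces of this revert together. On the first pass, ALLOC_NOFRAGMENT makes __rmqueue_fallback() raise min_order to pageblock_order so fallback stealing only happens in whole pageblocks; if the local zones turn out to be exhausted or fragmented, get_page_from_freelist() clears the flag and retries, preferring a fragmenting fallback on the local node over spilling to a remote one.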
/*
* get_page_from_freelist goes through the zonelist trying to allocate
* a page.
@ -3425,14 +3499,18 @@ static struct page *
get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
const struct alloc_context *ac)
{
struct zoneref *z = ac->preferred_zoneref;
struct zoneref *z;
struct zone *zone;
struct pglist_data *last_pgdat_dirty_limit = NULL;
bool no_fallback;
retry:
/*
* Scan zonelist, looking for a zone with enough free.
* See also __cpuset_node_allowed() comment in kernel/cpuset.c.
*/
no_fallback = alloc_flags & ALLOC_NOFRAGMENT;
z = ac->preferred_zoneref;
for_next_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx,
ac->nodemask) {
struct page *page;
@ -3471,7 +3549,23 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
}
}
mark = zone->watermark[alloc_flags & ALLOC_WMARK_MASK];
if (no_fallback && nr_online_nodes > 1 &&
zone != ac->preferred_zoneref->zone) {
int local_nid;
/*
* If moving to a remote node, retry but allow
* fragmenting fallbacks. Locality is more important
* than fragmentation avoidance.
*/
local_nid = zone_to_nid(ac->preferred_zoneref->zone);
if (zone_to_nid(zone) != local_nid) {
alloc_flags &= ~ALLOC_NOFRAGMENT;
goto retry;
}
}
mark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
if (!zone_watermark_fast(zone, order, mark,
ac_classzone_idx(ac), alloc_flags)) {
int ret;
@ -3538,6 +3632,15 @@ try_this_zone:
}
}
/*
* It's possible on a UMA machine to get through all zones that are
* fragmented. If avoiding fragmentation, reset and try again.
*/
if (no_fallback) {
alloc_flags &= ~ALLOC_NOFRAGMENT;
goto retry;
}
return NULL;
}
@ -4039,6 +4142,9 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
} else if (unlikely(rt_task(current)) && !in_interrupt())
alloc_flags |= ALLOC_HARDER;
if (gfp_mask & __GFP_KSWAPD_RECLAIM)
alloc_flags |= ALLOC_KSWAPD;
#ifdef CONFIG_CMA
if (gfpflags_to_migratetype(gfp_mask) == MIGRATE_MOVABLE)
alloc_flags |= ALLOC_CMA;
@ -4270,7 +4376,7 @@ retry_cpuset:
if (!ac->preferred_zoneref->zone)
goto nopage;
if (gfp_mask & __GFP_KSWAPD_RECLAIM)
if (alloc_flags & ALLOC_KSWAPD)
wake_all_kswapds(order, gfp_mask, ac);
/*
@ -4328,7 +4434,7 @@ retry_cpuset:
retry:
/* Ensure kswapd doesn't accidentally go to sleep as long as we loop */
if (gfp_mask & __GFP_KSWAPD_RECLAIM)
if (alloc_flags & ALLOC_KSWAPD)
wake_all_kswapds(order, gfp_mask, ac);
reserve_flags = __gfp_pfmemalloc_flags(gfp_mask);
@ -4547,6 +4653,12 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
finalise_ac(gfp_mask, &ac);
/*
* Forbid the first pass from falling back to types that fragment
* memory until all local zones are considered.
*/
alloc_flags |= alloc_flags_nofragment(ac.preferred_zoneref->zone, gfp_mask);
/* First allocation attempt */
page = get_page_from_freelist(alloc_mask, order, alloc_flags, &ac);
if (likely(page))
@ -4887,7 +4999,7 @@ long si_mem_available(void)
pages[lru] = global_node_page_state(NR_LRU_BASE + lru);
for_each_zone(zone)
wmark_low += zone->watermark[WMARK_LOW];
wmark_low += low_wmark_pages(zone);
/*
* Estimate the amount of memory available for userspace allocations,
@ -7461,13 +7573,13 @@ static void __setup_per_zone_wmarks(void)
min_pages = zone->managed_pages / 1024;
min_pages = clamp(min_pages, SWAP_CLUSTER_MAX, 128UL);
zone->watermark[WMARK_MIN] = min_pages;
zone->_watermark[WMARK_MIN] = min_pages;
} else {
/*
* If it's a lowmem zone, reserve a number of pages
* proportionate to the zone's size.
*/
zone->watermark[WMARK_MIN] = min;
zone->_watermark[WMARK_MIN] = min;
}
/*
@ -7479,10 +7591,11 @@ static void __setup_per_zone_wmarks(void)
mult_frac(zone->managed_pages,
watermark_scale_factor, 10000));
zone->watermark[WMARK_LOW] = min_wmark_pages(zone) +
zone->_watermark[WMARK_LOW] = min_wmark_pages(zone) +
low + min;
zone->watermark[WMARK_HIGH] = min_wmark_pages(zone) +
zone->_watermark[WMARK_HIGH] = min_wmark_pages(zone) +
low + min * 2;
zone->watermark_boost = 0;
spin_unlock_irqrestore(&zone->lock, flags);
}
@ -7583,6 +7696,18 @@ int min_free_kbytes_sysctl_handler(struct ctl_table *table, int write,
return 0;
}
int watermark_boost_factor_sysctl_handler(struct ctl_table *table, int write,
void __user *buffer, size_t *length, loff_t *ppos)
{
int rc;
rc = proc_dointvec_minmax(table, write, buffer, length, ppos);
if (rc)
return rc;
return 0;
}
int watermark_scale_factor_sysctl_handler(struct ctl_table *table, int write,
void __user *buffer, size_t *length, loff_t *ppos)
{

View file

@ -87,6 +87,9 @@ struct scan_control {
/* Can pages be swapped as part of reclaim? */
unsigned int may_swap:1;
/* e.g. boosted watermark reclaim leaves slabs alone */
unsigned int may_shrinkslab:1;
/*
* Cgroups are not reclaimed below their configured memory.low,
* unless we threaten to OOM. If any cgroups are skipped due to
@ -2739,8 +2742,10 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
shrink_node_memcg(pgdat, memcg, sc, &lru_pages);
node_lru_pages += lru_pages;
shrink_slab(sc->gfp_mask, pgdat->node_id,
if (sc->may_shrinkslab) {
shrink_slab(sc->gfp_mask, pgdat->node_id,
memcg, sc->priority);
}
/* Record the group's reclaim efficiency */
vmpressure(sc->gfp_mask, memcg, false,
@ -3218,6 +3223,7 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
.may_writepage = !laptop_mode,
.may_unmap = 1,
.may_swap = 1,
.may_shrinkslab = 1,
};
/*
@ -3262,6 +3268,7 @@ unsigned long mem_cgroup_shrink_node(struct mem_cgroup *memcg,
.may_unmap = 1,
.reclaim_idx = MAX_NR_ZONES - 1,
.may_swap = !noswap,
.may_shrinkslab = 1,
};
unsigned long lru_pages;
@ -3308,6 +3315,7 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
.may_writepage = !laptop_mode,
.may_unmap = 1,
.may_swap = may_swap,
.may_shrinkslab = 1,
};
/*
@ -3358,6 +3366,30 @@ static void age_active_anon(struct pglist_data *pgdat,
} while (memcg);
}
static bool pgdat_watermark_boosted(pg_data_t *pgdat, int classzone_idx)
{
int i;
struct zone *zone;
/*
* Check for watermark boosts top-down as the higher zones
* are more likely to be boosted. Both watermarks and boosts
* should not be checked at the same time as reclaim would
* start prematurely when there is no boosting and a lower
* zone is balanced.
*/
for (i = classzone_idx; i >= 0; i--) {
zone = pgdat->node_zones + i;
if (!managed_zone(zone))
continue;
if (zone->watermark_boost)
return true;
}
return false;
}
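Note (not part of the diff): pgdat_watermark_boosted() is what keeps kswapd wakeups effective while a boost is pending; as the wakeup_kswapd() hunk further down shows, a node that looks balanced is no longer treated as hopeless when any eligible zone still carries a non-zero watermark_boost, so kswapd can run, reclaim the boosted amount, and clear it.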
/*
* Returns true if there is an eligible zone balanced for the request order
* and classzone_idx
@ -3368,6 +3400,10 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int classzone_idx)
unsigned long mark = -1;
struct zone *zone;
/*
* Check watermarks bottom-up as lower zones are more likely to
* meet watermarks.
*/
for (i = 0; i <= classzone_idx; i++) {
zone = pgdat->node_zones + i;
@ -3496,14 +3532,14 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
unsigned long nr_soft_reclaimed;
unsigned long nr_soft_scanned;
unsigned long pflags;
unsigned long nr_boost_reclaim;
unsigned long zone_boosts[MAX_NR_ZONES] = { 0, };
bool boosted;
struct zone *zone;
struct scan_control sc = {
.gfp_mask = GFP_KERNEL,
.order = order,
.priority = DEF_PRIORITY,
.may_writepage = !laptop_mode,
.may_unmap = 1,
.may_swap = 1,
};
psi_memstall_enter(&pflags);
@ -3511,9 +3547,28 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
count_vm_event(PAGEOUTRUN);
/*
* Account for the reclaim boost. Note that the zone boost is left in
* place so that parallel allocations that are near the watermark will
* stall or direct reclaim until kswapd is finished.
*/
nr_boost_reclaim = 0;
for (i = 0; i <= classzone_idx; i++) {
zone = pgdat->node_zones + i;
if (!managed_zone(zone))
continue;
nr_boost_reclaim += zone->watermark_boost;
zone_boosts[i] = zone->watermark_boost;
}
boosted = nr_boost_reclaim;
restart:
sc.priority = DEF_PRIORITY;
do {
unsigned long nr_reclaimed = sc.nr_reclaimed;
bool raise_priority = true;
bool balanced;
bool ret;
sc.reclaim_idx = classzone_idx;
@ -3540,13 +3595,40 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
}
/*
* Only reclaim if there are no eligible zones. Note that
* sc.reclaim_idx is not used as buffer_heads_over_limit may
* have adjusted it.
* If the pgdat is imbalanced then ignore boosting and preserve
* the watermarks for a later time and restart. Note that the
* zone watermarks will be still reset at the end of balancing
* on the grounds that the normal reclaim should be enough to
* re-evaluate if boosting is required when kswapd next wakes.
*/
if (pgdat_balanced(pgdat, sc.order, classzone_idx))
balanced = pgdat_balanced(pgdat, sc.order, classzone_idx);
if (!balanced && nr_boost_reclaim) {
nr_boost_reclaim = 0;
goto restart;
}
/*
* If boosting is not active then only reclaim if there are no
* eligible zones. Note that sc.reclaim_idx is not used as
* buffer_heads_over_limit may have adjusted it.
*/
if (!nr_boost_reclaim && balanced)
goto out;
/* Limit the priority of boosting to avoid reclaim writeback */
if (nr_boost_reclaim && sc.priority == DEF_PRIORITY - 2)
raise_priority = false;
/*
* Do not writeback or swap pages for boosted reclaim. The
* intent is to relieve pressure not issue sub-optimal IO
* from reclaim context. If no pages are reclaimed, the
* reclaim will be aborted.
*/
sc.may_writepage = !laptop_mode && !nr_boost_reclaim;
sc.may_swap = !nr_boost_reclaim;
sc.may_shrinkslab = !nr_boost_reclaim;
/*
* Do some background aging of the anon list, to give
* pages a chance to be referenced before reclaiming. All
@ -3598,6 +3680,16 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
* progress in reclaiming pages
*/
nr_reclaimed = sc.nr_reclaimed - nr_reclaimed;
nr_boost_reclaim -= min(nr_boost_reclaim, nr_reclaimed);
/*
* If reclaim made no progress for a boost, stop reclaim as
* IO cannot be queued and it could be an infinite loop in
* extreme circumstances.
*/
if (nr_boost_reclaim && !nr_reclaimed)
break;
if (raise_priority || !nr_reclaimed)
sc.priority--;
} while (sc.priority >= 1);
@ -3606,6 +3698,28 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
pgdat->kswapd_failures++;
out:
/* If reclaim was boosted, account for the reclaim done in this pass */
if (boosted) {
unsigned long flags;
for (i = 0; i <= classzone_idx; i++) {
if (!zone_boosts[i])
continue;
/* Increments are under the zone lock */
zone = pgdat->node_zones + i;
spin_lock_irqsave(&zone->lock, flags);
zone->watermark_boost -= min(zone->watermark_boost, zone_boosts[i]);
spin_unlock_irqrestore(&zone->lock, flags);
}
/*
* As there is now likely space, wakeup kcompact to defragment
* pageblocks.
*/
wakeup_kcompactd(pgdat, pageblock_order, classzone_idx);
}
snapshot_refaults(NULL, pgdat);
__fs_reclaim_release();
psi_memstall_leave(&pflags);
@ -3837,7 +3951,8 @@ void wakeup_kswapd(struct zone *zone, gfp_t gfp_flags, int order,
/* Hopeless node, leave it to direct reclaim if possible */
if (pgdat->kswapd_failures >= MAX_RECLAIM_RETRIES ||
pgdat_balanced(pgdat, order, classzone_idx)) {
(pgdat_balanced(pgdat, order, classzone_idx) &&
!pgdat_watermark_boosted(pgdat, classzone_idx))) {
/*
* There may be plenty of free memory available, but it's too
* fragmented for high-order allocations. Wake up kcompactd

View file

@ -1,11 +1,18 @@
# SPDX-License-Identifier: GPL-2.0
CFLAGS += -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE -Wall
CFLAGS += -I../.. -I../../../../..
CFLAGS += -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE -Wall -lssl -lcrypto -llz4
CFLAGS += -I../../../../../usr/include/
CFLAGS += -I../../../../include/uapi/
CFLAGS += -I../../../../lib
LDLIBS := -llz4 -lcrypto
EXTRA_SOURCES := utils.c
CFLAGS += $(EXTRA_SOURCES)
TEST_GEN_PROGS := incfs_test
$(TEST_GEN_PROGS): $(EXTRA_SOURCES)
include ../../lib.mk
$(OUTPUT)incfs_test: incfs_test.c $(EXTRA_SOURCES)
all: $(OUTPUT)incfs_test
clean:
rm -rf $(OUTPUT)incfs_test *.o

View file

@ -0,0 +1 @@
CONFIG_INCREMENTAL_FS=y

View file

@ -2,29 +2,27 @@
/*
* Copyright 2018 Google LLC
*/
#include <alloca.h>
#include <dirent.h>
#include <errno.h>
#include <fcntl.h>
#include <lz4.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/mount.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <dirent.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <sys/mount.h>
#include <errno.h>
#include <sys/wait.h>
#include <sys/xattr.h>
#include <alloca.h>
#include <string.h>
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>
#include <linux/random.h>
#include <linux/unistd.h>
#include <kselftest.h>
#include "../../kselftest.h"
#include "lz4.h"
#include "utils.h"
#define TEST_FAILURE 1
@ -210,7 +208,7 @@ int open_file_by_id(const char *mnt_dir, incfs_uuid_t id, bool use_ioctl)
{
char *path = get_index_filename(mnt_dir, id);
int cmd_fd = open_commands_file(mnt_dir);
int fd = open(path, O_RDWR | O_CLOEXEC);
int fd = open(path, O_RDWR);
struct incfs_permit_fill permit_fill = {
.file_descriptor = fd,
};
@ -283,7 +281,7 @@ static int emit_test_blocks(char *mnt_dir, struct test_file *file,
.fill_blocks = ptr_to_u64(block_buf),
};
ssize_t write_res = 0;
int fd = -1;
int fd;
int error = 0;
int i = 0;
int blocks_written = 0;
@ -446,7 +444,7 @@ static loff_t read_whole_file(char *filename)
loff_t bytes_read = 0;
uint8_t buff[16 * 1024];
fd = open(filename, O_RDONLY | O_CLOEXEC);
fd = open(filename, O_RDONLY);
if (fd <= 0)
return fd;
@ -478,7 +476,7 @@ static int read_test_file(uint8_t *buf, size_t len, char *filename,
size_t bytes_to_read = len;
off_t offset = ((off_t)block_idx) * INCFS_DATA_FILE_BLOCK_SIZE;
fd = open(filename, O_RDONLY | O_CLOEXEC);
fd = open(filename, O_RDONLY);
if (fd <= 0)
return fd;
@ -911,7 +909,7 @@ static bool iterate_directory(char *dir_to_iterate, bool root, int file_count)
int i;
/* Test directory iteration */
int fd = open(dir_to_iterate, O_RDONLY | O_DIRECTORY | O_CLOEXEC);
int fd = open(dir_to_iterate, O_RDONLY | O_DIRECTORY);
if (fd < 0) {
print_error("Can't open directory\n");
@ -1112,7 +1110,7 @@ static int basic_file_ops_test(char *mount_dir)
char *path = concat_file_name(mount_dir, file->name);
int fd;
fd = open(path, O_RDWR | O_CLOEXEC);
fd = open(path, O_RDWR);
free(path);
if (fd <= 0) {
print_error("Can't open file");
@ -1932,88 +1930,48 @@ failure:
return TEST_FAILURE;
}
enum expected_log { FULL_LOG, NO_LOG, PARTIAL_LOG };
static int validate_logs(char *mount_dir, int log_fd, struct test_file *file,
enum expected_log expected_log)
static int validate_logs(char *mount_dir, int log_fd, struct test_file *file)
{
uint8_t data[INCFS_DATA_FILE_BLOCK_SIZE];
struct incfs_pending_read_info prs[2048] = {};
struct incfs_pending_read_info prs[100] = {};
int prs_size = ARRAY_SIZE(prs);
int block_cnt = 1 + (file->size - 1) / INCFS_DATA_FILE_BLOCK_SIZE;
int expected_read_block_cnt;
int res;
int read_count;
int i, j;
int i;
char *filename = concat_file_name(mount_dir, file->name);
int fd;
fd = open(filename, O_RDONLY | O_CLOEXEC);
fd = open(filename, O_RDONLY);
free(filename);
if (fd <= 0)
return TEST_FAILURE;
if (block_cnt > prs_size)
block_cnt = prs_size;
expected_read_block_cnt = block_cnt;
for (i = 0; i < block_cnt; i++) {
res = pread(fd, data, sizeof(data),
INCFS_DATA_FILE_BLOCK_SIZE * i);
/* Make some read logs of type SAME_FILE_NEXT_BLOCK */
if (i % 10 == 0)
usleep(20000);
/* Skip some blocks to make logs of type SAME_FILE */
if (i % 10 == 5) {
++i;
--expected_read_block_cnt;
}
if (res <= 0)
goto failure;
}
read_count = wait_for_pending_reads(
log_fd, expected_log == NO_LOG ? 10 : 0, prs, prs_size);
if (expected_log == NO_LOG) {
if (read_count == 0)
goto success;
if (read_count < 0)
ksft_print_msg("Error reading logged reads %s.\n",
strerror(-read_count));
else
ksft_print_msg("Somehow read empty logs.\n");
goto failure;
}
read_count = wait_for_pending_reads(log_fd, 0, prs, prs_size);
if (read_count < 0) {
ksft_print_msg("Error reading logged reads %s.\n",
strerror(-read_count));
goto failure;
}
i = 0;
if (expected_log == PARTIAL_LOG) {
if (read_count == 0) {
ksft_print_msg("No logs %s.\n", file->name);
goto failure;
}
for (i = 0, j = 0; j < expected_read_block_cnt - read_count;
i++, j++)
if (i % 10 == 5)
++i;
} else if (read_count != expected_read_block_cnt) {
if (read_count != block_cnt) {
ksft_print_msg("Bad log read count %s %d %d.\n", file->name,
read_count, expected_read_block_cnt);
read_count, block_cnt);
goto failure;
}
for (j = 0; j < read_count; i++, j++) {
struct incfs_pending_read_info *read = &prs[j];
for (i = 0; i < read_count; i++) {
struct incfs_pending_read_info *read = &prs[i];
if (!same_id(&read->file_id, &file->id)) {
ksft_print_msg("Bad log read ino %s\n", file->name);
@ -2026,8 +1984,8 @@ static int validate_logs(char *mount_dir, int log_fd, struct test_file *file,
goto failure;
}
if (j != 0) {
unsigned long psn = prs[j - 1].serial_number;
if (i != 0) {
unsigned long psn = prs[i - 1].serial_number;
if (read->serial_number != psn + 1) {
ksft_print_msg("Bad log read sn %s %d %d.\n",
@ -2042,12 +2000,7 @@ static int validate_logs(char *mount_dir, int log_fd, struct test_file *file,
file->name);
goto failure;
}
if (i % 10 == 5)
++i;
}
success:
close(fd);
return TEST_SUCCESS;
@ -2061,14 +2014,14 @@ static int read_log_test(char *mount_dir)
struct test_files_set test = get_test_files_set();
const int file_num = test.files_count;
int i = 0;
int cmd_fd = -1, log_fd = -1, drop_caches = -1;
int cmd_fd = -1, log_fd = -1;
char *backing_dir;
backing_dir = create_backing_dir(mount_dir);
if (!backing_dir)
goto failure;
if (mount_fs_opt(mount_dir, backing_dir, "readahead=0", false) != 0)
if (mount_fs_opt(mount_dir, backing_dir, "readahead=0") != 0)
goto failure;
cmd_fd = open_commands_file(mount_dir);
@ -2076,7 +2029,7 @@ static int read_log_test(char *mount_dir)
goto failure;
log_fd = open_log_file(mount_dir);
if (log_fd < 0)
if (cmd_fd < 0)
ksft_print_msg("Can't open log file.\n");
/* Write data. */
@ -2095,7 +2048,7 @@ static int read_log_test(char *mount_dir)
for (i = 0; i < file_num; i++) {
struct test_file *file = &test.files[i];
if (validate_logs(mount_dir, log_fd, file, FULL_LOG))
if (validate_logs(mount_dir, log_fd, file))
goto failure;
}
@ -2108,7 +2061,7 @@ static int read_log_test(char *mount_dir)
goto failure;
}
if (mount_fs_opt(mount_dir, backing_dir, "readahead=0", false) != 0)
if (mount_fs_opt(mount_dir, backing_dir, "readahead=0") != 0)
goto failure;
cmd_fd = open_commands_file(mount_dir);
@ -2116,91 +2069,19 @@ static int read_log_test(char *mount_dir)
goto failure;
log_fd = open_log_file(mount_dir);
if (log_fd < 0)
if (cmd_fd < 0)
ksft_print_msg("Can't open log file.\n");
/* Validate data again */
for (i = 0; i < file_num; i++) {
struct test_file *file = &test.files[i];
if (validate_logs(mount_dir, log_fd, file, FULL_LOG))
goto failure;
}
/*
* Unmount and mount again with no read log to make sure poll
* doesn't crash
*/
close(cmd_fd);
close(log_fd);
if (umount(mount_dir) != 0) {
print_error("Can't unmout FS");
goto failure;
}
if (mount_fs_opt(mount_dir, backing_dir, "readahead=0,rlog_pages=0",
false) != 0)
goto failure;
log_fd = open_log_file(mount_dir);
if (log_fd < 0)
ksft_print_msg("Can't open log file.\n");
/* Validate data again - note should fail this time */
for (i = 0; i < file_num; i++) {
struct test_file *file = &test.files[i];
if (validate_logs(mount_dir, log_fd, file, NO_LOG))
goto failure;
}
/*
* Remount and check that logs start working again
*/
drop_caches = open("/proc/sys/vm/drop_caches", O_WRONLY | O_CLOEXEC);
if (drop_caches == -1)
goto failure;
i = write(drop_caches, "3", 1);
close(drop_caches);
if (i != 1)
goto failure;
if (mount_fs_opt(mount_dir, backing_dir, "readahead=0,rlog_pages=1",
true) != 0)
goto failure;
/* Validate data again */
for (i = 0; i < file_num; i++) {
struct test_file *file = &test.files[i];
if (validate_logs(mount_dir, log_fd, file, PARTIAL_LOG))
goto failure;
}
/*
* Remount and check that logs start working again
*/
drop_caches = open("/proc/sys/vm/drop_caches", O_WRONLY | O_CLOEXEC);
if (drop_caches == -1)
goto failure;
i = write(drop_caches, "3", 1);
close(drop_caches);
if (i != 1)
goto failure;
if (mount_fs_opt(mount_dir, backing_dir, "readahead=0,rlog_pages=4",
true) != 0)
goto failure;
/* Validate data again */
for (i = 0; i < file_num; i++) {
struct test_file *file = &test.files[i];
if (validate_logs(mount_dir, log_fd, file, FULL_LOG))
if (validate_logs(mount_dir, log_fd, file))
goto failure;
}
/* Final unmount */
close(cmd_fd);
close(log_fd);
free(backing_dir);
if (umount(mount_dir) != 0) {
@ -2267,29 +2148,12 @@ static int validate_ranges(const char *mount_dir, struct test_file *file)
int error = TEST_SUCCESS;
int i;
int range_cnt;
int cmd_fd = -1;
struct incfs_permit_fill permit_fill;
fd = open(filename, O_RDONLY | O_CLOEXEC);
fd = open(filename, O_RDONLY);
free(filename);
if (fd <= 0)
return TEST_FAILURE;
error = ioctl(fd, INCFS_IOC_GET_FILLED_BLOCKS, &fba);
if (error != -1 || errno != EPERM) {
ksft_print_msg("INCFS_IOC_GET_FILLED_BLOCKS not blocked\n");
error = -EPERM;
goto out;
}
cmd_fd = open_commands_file(mount_dir);
permit_fill.file_descriptor = fd;
if (ioctl(cmd_fd, INCFS_IOC_PERMIT_FILL, &permit_fill)) {
print_error("INCFS_IOC_PERMIT_FILL failed");
return -EPERM;
goto out;
}
error = ioctl(fd, INCFS_IOC_GET_FILLED_BLOCKS, &fba);
if (error && errno != ERANGE)
goto out;
@ -2307,11 +2171,6 @@ static int validate_ranges(const char *mount_dir, struct test_file *file)
goto out;
}
if (fba.data_blocks_out != block_cnt) {
error = -EINVAL;
goto out;
}
range_cnt = (block_cnt + 3) / 4;
if (range_cnt > 128)
range_cnt = 128;
@ -2347,6 +2206,8 @@ static int validate_ranges(const char *mount_dir, struct test_file *file)
if (fba.start_index >= block_cnt) {
if (fba.index_out != fba.start_index) {
printf("Paul: %d, %d\n", (int)fba.index_out,
(int)fba.start_index);
error = -EINVAL;
goto out;
}
@ -2380,7 +2241,6 @@ static int validate_ranges(const char *mount_dir, struct test_file *file)
out:
close(fd);
close(cmd_fd);
return error;
}
@ -2396,7 +2256,7 @@ static int get_blocks_test(char *mount_dir)
if (!backing_dir)
goto failure;
if (mount_fs_opt(mount_dir, backing_dir, "readahead=0", false) != 0)
if (mount_fs_opt(mount_dir, backing_dir, "readahead=0") != 0)
goto failure;
cmd_fd = open_commands_file(mount_dir);
@@ -2491,7 +2351,6 @@ failure:
static int validate_hash_ranges(const char *mount_dir, struct test_file *file)
{
int block_cnt = 1 + (file->size - 1) / INCFS_DATA_FILE_BLOCK_SIZE;
char *filename = concat_file_name(mount_dir, file->name);
int fd;
struct incfs_filled_range ranges[128];
@@ -2502,46 +2361,19 @@ static int validate_hash_ranges(const char *mount_dir, struct test_file *file)
int error = TEST_SUCCESS;
int file_blocks = (file->size + INCFS_DATA_FILE_BLOCK_SIZE - 1) /
INCFS_DATA_FILE_BLOCK_SIZE;
int cmd_fd = -1;
struct incfs_permit_fill permit_fill;
if (file->size <= 4096 / 32 * 4096)
return 0;
fd = open(filename, O_RDONLY | O_CLOEXEC);
fd = open(filename, O_RDONLY);
free(filename);
if (fd <= 0)
return TEST_FAILURE;
error = ioctl(fd, INCFS_IOC_GET_FILLED_BLOCKS, &fba);
if (error != -1 || errno != EPERM) {
ksft_print_msg("INCFS_IOC_GET_FILLED_BLOCKS not blocked\n");
error = -EPERM;
goto out;
}
cmd_fd = open_commands_file(mount_dir);
permit_fill.file_descriptor = fd;
if (ioctl(cmd_fd, INCFS_IOC_PERMIT_FILL, &permit_fill)) {
print_error("INCFS_IOC_PERMIT_FILL failed");
return -EPERM;
goto out;
}
error = ioctl(fd, INCFS_IOC_GET_FILLED_BLOCKS, &fba);
if (error)
goto out;
if (fba.total_blocks_out <= block_cnt) {
error = -EINVAL;
goto out;
}
if (fba.data_blocks_out != block_cnt) {
error = -EINVAL;
goto out;
}
if (fba.range_buffer_size_out != sizeof(struct incfs_filled_range)) {
error = -EINVAL;
goto out;
@@ -2554,7 +2386,6 @@ static int validate_hash_ranges(const char *mount_dir, struct test_file *file)
}
out:
close(cmd_fd);
close(fd);
return error;
}
@@ -2571,7 +2402,7 @@ static int get_hash_blocks_test(char *mount_dir)
if (!backing_dir)
goto failure;
if (mount_fs_opt(mount_dir, backing_dir, "readahead=0", false) != 0)
if (mount_fs_opt(mount_dir, backing_dir, "readahead=0") != 0)
goto failure;
cmd_fd = open_commands_file(mount_dir);
@@ -2609,65 +2440,6 @@ failure:
return TEST_FAILURE;
}
static int large_file(char *mount_dir)
{
char *backing_dir;
int cmd_fd = -1;
int i;
int result = TEST_FAILURE;
uint8_t data[INCFS_DATA_FILE_BLOCK_SIZE] = {};
int block_count = 3LL * 1024 * 1024 * 1024 / INCFS_DATA_FILE_BLOCK_SIZE;
struct incfs_fill_block *block_buf =
calloc(block_count, sizeof(struct incfs_fill_block));
struct incfs_fill_blocks fill_blocks = {
.count = block_count,
.fill_blocks = ptr_to_u64(block_buf),
};
incfs_uuid_t id;
int fd;
backing_dir = create_backing_dir(mount_dir);
if (!backing_dir)
goto failure;
if (mount_fs_opt(mount_dir, backing_dir, "readahead=0", false) != 0)
goto failure;
cmd_fd = open_commands_file(mount_dir);
if (cmd_fd < 0)
goto failure;
if (emit_file(cmd_fd, NULL, "very_large_file", &id,
(uint64_t)block_count * INCFS_DATA_FILE_BLOCK_SIZE,
NULL) < 0)
goto failure;
for (i = 0; i < block_count; i++) {
block_buf[i].compression = COMPRESSION_NONE;
block_buf[i].block_index = i;
block_buf[i].data_len = INCFS_DATA_FILE_BLOCK_SIZE;
block_buf[i].data = ptr_to_u64(data);
}
fd = open_file_by_id(mount_dir, id, true);
if (fd < 0)
goto failure;
if (ioctl(fd, INCFS_IOC_FILL_BLOCKS, &fill_blocks) != block_count)
goto failure;
if (emit_file(cmd_fd, NULL, "very_very_large_file", &id, 1LL << 40,
NULL) < 0)
goto failure;
result = TEST_SUCCESS;
failure:
close(fd);
close(cmd_fd);
return result;
}
static char *setup_mount_dir()
{
struct stat st;
@@ -2702,7 +2474,7 @@ int main(int argc, char *argv[])
// NOTE - this abuses the concept of randomness - do *not* ever do this
// on a machine for production use - the device will think it has good
// randomness when it does not.
fd = open("/dev/urandom", O_WRONLY | O_CLOEXEC);
fd = open("/dev/urandom", O_WRONLY);
count = 4096;
for (int i = 0; i < 128; ++i)
ioctl(fd, RNDADDTOENTCNT, &count);
@@ -2737,7 +2509,6 @@ int main(int argc, char *argv[])
MAKE_TEST(read_log_test),
MAKE_TEST(get_blocks_test),
MAKE_TEST(get_hash_blocks_test),
MAKE_TEST(large_file),
};
#undef MAKE_TEST
@@ -2758,7 +2529,7 @@ int main(int argc, char *argv[])
rmdir(mount_dir);
if (fails > 0)
ksft_exit_fail();
ksft_exit_pass();
else
ksft_exit_pass();
return 0;


@@ -2,29 +2,27 @@
/*
* Copyright 2018 Google LLC
*/
#include <dirent.h>
#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <dirent.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mount.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <errno.h>
#include <string.h>
#include <poll.h>
#include <openssl/bio.h>
#include <openssl/err.h>
#include <openssl/pem.h>
#include <openssl/pkcs7.h>
#include <openssl/sha.h>
#include <openssl/md5.h>
#include "utils.h"
#ifndef __S_IFREG
#define __S_IFREG S_IFREG
#endif
int mount_fs(const char *mount_dir, const char *backing_dir,
int read_timeout_ms)
{
@@ -43,13 +41,12 @@ int mount_fs(const char *mount_dir, const char *backing_dir,
}
int mount_fs_opt(const char *mount_dir, const char *backing_dir,
const char *opt, bool remount)
const char *opt)
{
static const char fs_name[] = INCFS_NAME;
int result;
result = mount(backing_dir, mount_dir, fs_name,
remount ? MS_REMOUNT : 0, opt);
result = mount(backing_dir, mount_dir, fs_name, 0, opt);
if (result != 0)
perror("Error mounting fs.");
return result;
@@ -186,7 +183,7 @@ int open_commands_file(const char *mount_dir)
snprintf(cmd_file, ARRAY_SIZE(cmd_file),
"%s/%s", mount_dir, INCFS_PENDING_READS_FILENAME);
cmd_fd = open(cmd_file, O_RDONLY | O_CLOEXEC);
cmd_fd = open(cmd_file, O_RDONLY);
if (cmd_fd < 0)
perror("Can't open commands file");
@@ -199,7 +196,7 @@ int open_log_file(const char *mount_dir)
int cmd_fd;
snprintf(cmd_file, ARRAY_SIZE(cmd_file), "%s/.log", mount_dir);
cmd_fd = open(cmd_file, O_RDWR | O_CLOEXEC);
cmd_fd = open(cmd_file, O_RDWR);
if (cmd_fd < 0)
perror("Can't open log file");
return cmd_fd;


@@ -5,7 +5,7 @@
#include <stdbool.h>
#include <sys/stat.h>
#include <include/uapi/linux/incrementalfs.h>
#include "../../include/uapi/linux/incrementalfs.h"
#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof(arr[0]))
@@ -23,7 +23,7 @@ int mount_fs(const char *mount_dir, const char *backing_dir,
int read_timeout_ms);
int mount_fs_opt(const char *mount_dir, const char *backing_dir,
const char *opt, bool remount);
const char *opt);
int get_file_bmap(int cmd_fd, int ino, unsigned char *buf, int buf_size);