This is the 4.19.193 stable release
-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAmC4eNoACgkQONu9yGCS
aT6liA/7BGTnGt5E20VY+gPS9ydo1LukuZ7ZT8rXoedz8ZJZE5g4vVYyUJT2yxL0
PabDMEbMjWLs26DTGnTL9orgrzENvTJyVUNQ504CQxh8jTfb1Ogti1Zc4JTKyB+m
3IdSLEdrBKasR5jsEpxtgGz5xZWzB/QX/MXX2myGAVZglFpPIxnmwdsZWNkLGyPx
ofoBEeGypeJqpijvn79mX4LFZW8vzLzOGE+Z24Hg9XxALhd+o4pnw4d/oE6JwYIY
gOOIUUxoX9O3dy7b26xezqfmiDPlPzxbYTEL2lPnW+HYXhFl9PmqRS6vlpz6mhAO
c8EIWvxcemx7Kc1JOe3b9jZc04iihY0IBtGylAtuT7sAE4RZnlRcPkl+2W+L77Tx
BuAxTtMhxKWsr22rw+0hYxIdFgsQApit+qOjmVMecu20IYHyWZwuZGBbdA83SSZk
0HSqgTkTsmZ6dgA+BocdmZ0r8zYfgUSAxZn2xmGCeEIcDeynbzJrvltpimDOvfjk
7vdmGzOIXsCWlpJ9VPMar2T19PtDezE6gdw7x6B0Z04c5WT9pFkjFB9UAtC2A7mm
hGjeFQJ1WNUoLxz9b53t4Ii8uHDvseu9jHzD2JL1DR+oBNRBUoPV2SklScmwDk3y
5sAlJtRFNet3NMfyIdWb167FN+hv4Kl5bA0uolRsMTBtfHZ6Sjw=
=rpdS
-----END PGP SIGNATURE-----

Merge 4.19.193 into android-4.19-stable

Changes in 4.19.193
	mm, vmstat: drop zone->lock in /proc/pagetypeinfo
	usb: dwc3: gadget: Enable suspend events
	NFC: nci: fix memory leak in nci_allocate_device
	cifs: set server->cipher_type to AES-128-CCM for SMB3.0
	NFSv4: Fix a NULL pointer dereference in pnfs_mark_matching_lsegs_return()
	iommu/vt-d: Fix sysfs leak in alloc_iommu()
	perf intel-pt: Fix sample instruction bytes
	perf intel-pt: Fix transaction abort handling
	proc: Check /proc/$pid/attr/ writes against file opener
	net: hso: fix control-request directions
	mac80211: assure all fragments are encrypted
	mac80211: prevent mixed key and fragment cache attacks
	mac80211: properly handle A-MSDUs that start with an RFC 1042 header
	cfg80211: mitigate A-MSDU aggregation attacks
	mac80211: drop A-MSDUs on old ciphers
	mac80211: add fragment cache to sta_info
	mac80211: check defrag PN against current frame
	mac80211: prevent attacks on TKIP/WEP as well
	mac80211: do not accept/forward invalid EAPOL frames
	mac80211: extend protection against mixed key and fragment cache attacks
	ath10k: Validate first subframe of A-MSDU before processing the list
	dm snapshot: properly fix a crash when an origin has no snapshots
	kgdb: fix gcc-11 warnings harder
	misc/uss720: fix memory leak in uss720_probe
	thunderbolt: dma_port: Fix NVM read buffer bounds and offset issue
	mei: request autosuspend after sending rx flow control
	staging: iio: cdc: ad7746: avoid overwrite of num_channels
	iio: adc: ad7793: Add missing error code in ad7793_setup()
	USB: trancevibrator: fix control-request direction
	USB: usbfs: Don't WARN about excessively large memory allocations
	serial: sh-sci: Fix off-by-one error in FIFO threshold register setting
	serial: rp2: use 'request_firmware' instead of 'request_firmware_nowait'
	USB: serial: ti_usb_3410_5052: add startech.com device id
	USB: serial: option: add Telit LE910-S1 compositions 0x7010, 0x7011
	USB: serial: ftdi_sio: add IDs for IDS GmbH Products
	USB: serial: pl2303: add device id for ADLINK ND-6530 GC
	usb: dwc3: gadget: Properly track pending and queued SG
	usb: gadget: udc: renesas_usb3: Fix a race in usb3_start_pipen()
	net: usb: fix memory leak in smsc75xx_bind
	bpf: fix up selftests after backports were fixed
	bpf, selftests: Fix up some test_verifier cases for unprivileged
	selftests/bpf: Test narrow loads with off > 0 in test_verifier
	selftests/bpf: add selftest part of "bpf: improve verifier branch analysis"
	bpf: extend is_branch_taken to registers
	bpf: Test_verifier, bpf_get_stack return value add <0
	bpf, test_verifier: switch bpf_get_stack's 0 s> r8 test
	bpf: Move off_reg into sanitize_ptr_alu
	bpf: Ensure off_reg has no mixed signed bounds for all types
	bpf: Rework ptr_limit into alu_limit and add common error path
	bpf: Improve verifier error messages for users
	bpf: Refactor and streamline bounds check into helper
	bpf: Move sanitize_val_alu out of op switch
	bpf: Tighten speculative pointer arithmetic mask
	bpf: Update selftests to reflect new error states
	bpf: Fix leakage of uninitialized bpf stack under speculation
	bpf: Wrap aux data inside bpf_sanitize_info container
	bpf: Fix mask direction swap upon off reg sign change
	bpf: No need to simulate speculative domain for immediates
	spi: gpio: Don't leak SPI master in probe error path
	spi: mt7621: Disable clock in probe error path
	spi: mt7621: Don't leak SPI master in probe error path
	Bluetooth: cmtp: fix file refcount when cmtp_attach_device fails
	NFS: fix an incorrect limit in filelayout_decode_layout()
	NFS: Don't corrupt the value of pg_bytes_written in nfs_do_recoalesce()
	NFSv4: Fix v4.0/v4.1 SEEK_DATA return -ENOTSUPP when set NFS_V4_2 config
	drm/meson: fix shutdown crash when component not probed
	net/mlx4: Fix EEPROM dump support
	Revert "net:tipc: Fix a double free in tipc_sk_mcast_rcv"
	tipc: skb_linearize the head skb when reassembling msgs
	net: dsa: mt7530: fix VLAN traffic leaks
	net: dsa: fix a crash if ->get_sset_count() fails
	i2c: s3c2410: fix possible NULL pointer deref on read message after write
	i2c: i801: Don't generate an interrupt on bus reset
	perf jevents: Fix getting maximum number of fds
	platform/x86: hp_accel: Avoid invoking _INI to speed up resume
	serial: max310x: unregister uart driver in case of failure and abort
	net: fujitsu: fix potential null-ptr-deref
	net: caif: remove BUG_ON(dev == NULL) in caif_xmit
	char: hpet: add checks after calling ioremap
	isdn: mISDNinfineon: check/cleanup ioremap failure correctly in setup_io
	dmaengine: qcom_hidma: comment platform_driver_register call
	libertas: register sysfs groups properly
	ASoC: cs43130: handle errors in cs43130_probe() properly
	media: dvb: Add check on sp8870_readreg return
	media: gspca: properly check for errors in po1030_probe()
	scsi: BusLogic: Fix 64-bit system enumeration error for Buslogic
	openrisc: Define memory barrier mb
	btrfs: do not BUG_ON in link_to_fixup_dir
	platform/x86: hp-wireless: add AMD's hardware id to the supported list
	platform/x86: intel_punit_ipc: Append MODULE_DEVICE_TABLE for ACPI
	SMB3: incorrect file id in requests compounded with open
	drm/amd/display: Disconnect non-DP with no EDID
	drm/amd/amdgpu: fix refcount leak
	drm/amdgpu: Fix a use-after-free
	net: netcp: Fix an error message
	net: dsa: fix error code getting shifted with 4 in dsa_slave_get_sset_count
	net: fec: fix the potential memory leak in fec_enet_init()
	net: mdio: thunder: Fix a double free issue in the .remove function
	net: mdio: octeon: Fix some double free issues
	openvswitch: meter: fix race when getting now_ms.
	net: bnx2: Fix error return code in bnx2_init_board()
	mld: fix panic in mld_newpack()
	staging: emxx_udc: fix loop in _nbu2ss_nuke()
	ASoC: cs35l33: fix an error code in probe()
	bpf: Set mac_len in bpf_skb_change_head
	ixgbe: fix large MTU request from VF
	scsi: libsas: Use _safe() loop in sas_resume_port()
	ipv6: record frag_max_size in atomic fragments in input path
	sch_dsmark: fix a NULL deref in qdisc_reset()
	MIPS: alchemy: xxs1500: add gpio-au1000.h header file
	MIPS: ralink: export rt_sysc_membase for rt2880_wdt.c
	hugetlbfs: hugetlb_fault_mutex_hash() cleanup
	drivers/net/ethernet: clean up unused assignments
	net: hns3: check the return of skb_checksum_help()
	usb: core: reduce power-on-good delay time of root hub
	Linux 4.19.193

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I0cac54fabadef0bc664d6958b1c8fcda4078ba4b
This commit is contained in: ea6ea821c5
119 changed files with 1049 additions and 518 deletions
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 4
 PATCHLEVEL = 19
-SUBLEVEL = 192
+SUBLEVEL = 193
 EXTRAVERSION =
 NAME = "People's Front"
 
--- a/arch/mips/alchemy/board-xxs1500.c
+++ b/arch/mips/alchemy/board-xxs1500.c
@@ -31,6 +31,7 @@
 #include <asm/reboot.h>
 #include <asm/setup.h>
 #include <asm/mach-au1x00/au1000.h>
+#include <asm/mach-au1x00/gpio-au1000.h>
 #include <prom.h>
 
 const char *get_system_type(void)
--- a/arch/mips/ralink/of.c
+++ b/arch/mips/ralink/of.c
@@ -10,6 +10,7 @@
 
 #include <linux/io.h>
 #include <linux/clk.h>
+#include <linux/export.h>
 #include <linux/init.h>
 #include <linux/sizes.h>
 #include <linux/of_fdt.h>
@@ -27,6 +28,7 @@
 
 __iomem void *rt_sysc_membase;
 __iomem void *rt_memc_membase;
+EXPORT_SYMBOL_GPL(rt_sysc_membase);
 
 __iomem void *plat_of_remap_node(const char *node)
 {
new file: arch/openrisc/include/asm/barrier.h
--- /dev/null
+++ b/arch/openrisc/include/asm/barrier.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_BARRIER_H
+#define __ASM_BARRIER_H
+
+#define mb() asm volatile ("l.msync" ::: "memory")
+
+#include <asm-generic/barrier.h>
+
+#endif /* __ASM_BARRIER_H */
--- a/drivers/char/hpet.c
+++ b/drivers/char/hpet.c
@@ -975,6 +975,8 @@ static acpi_status hpet_resources(struct acpi_resource *res, void *data)
 	if (ACPI_SUCCESS(status)) {
 		hdp->hd_phys_address = addr.address.minimum;
 		hdp->hd_address = ioremap(addr.address.minimum, addr.address.address_length);
+		if (!hdp->hd_address)
+			return AE_ERROR;
 
 		if (hpet_is_known(hdp)) {
 			iounmap(hdp->hd_address);
@@ -988,6 +990,8 @@ static acpi_status hpet_resources(struct acpi_resource *res, void *data)
 		hdp->hd_phys_address = fixmem32->address;
 		hdp->hd_address = ioremap(fixmem32->address,
 						HPET_RANGE_SIZE);
+		if (!hdp->hd_address)
+			return AE_ERROR;
 
 		if (hpet_is_known(hdp)) {
 			iounmap(hdp->hd_address);
--- a/drivers/dma/qcom/hidma_mgmt.c
+++ b/drivers/dma/qcom/hidma_mgmt.c
@@ -423,6 +423,20 @@ static int __init hidma_mgmt_init(void)
 		hidma_mgmt_of_populate_channels(child);
 	}
 #endif
+	/*
+	 * We do not check for return value here, as it is assumed that
+	 * platform_driver_register must not fail. The reason for this is that
+	 * the (potential) hidma_mgmt_of_populate_channels calls above are not
+	 * cleaned up if it does fail, and to do this work is quite
+	 * complicated. In particular, various calls of of_address_to_resource,
+	 * of_irq_to_resource, platform_device_register_full, of_dma_configure,
+	 * and of_msi_configure which then call other functions and so on, must
+	 * be cleaned up - this is not a trivial exercise.
+	 *
+	 * Currently, this module is not intended to be unloaded, and there is
+	 * no module_exit function defined which does the needed cleanup. For
+	 * this reason, we have to assume success here.
+	 */
 	platform_driver_register(&hidma_mgmt_driver);
 
 	return 0;
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
@@ -297,10 +297,13 @@ out:
 static int amdgpu_fbdev_destroy(struct drm_device *dev, struct amdgpu_fbdev *rfbdev)
 {
 	struct amdgpu_framebuffer *rfb = &rfbdev->rfb;
+	int i;
 
 	drm_fb_helper_unregister_fbi(&rfbdev->helper);
 
 	if (rfb->base.obj[0]) {
+		for (i = 0; i < rfb->base.format->num_planes; i++)
+			drm_gem_object_put(rfb->base.obj[0]);
 		amdgpufb_destroy_pinned_object(rfb->base.obj[0]);
 		rfb->base.obj[0] = NULL;
 		drm_framebuffer_unregister_private(&rfb->base);
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -1277,6 +1277,7 @@ static void amdgpu_ttm_tt_unpopulate(struct ttm_tt *ttm)
 	if (gtt && gtt->userptr) {
 		amdgpu_ttm_tt_set_user_pages(ttm, NULL);
 		kfree(ttm->sg);
+		ttm->sg = NULL;
 		ttm->page_flags &= ~TTM_PAGE_FLAG_SG;
 		return;
 	}
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
@@ -768,6 +768,24 @@ bool dc_link_detect(struct dc_link *link, enum dc_detect_reason reason)
 		    dc_is_dvi_signal(link->connector_signal)) {
 			if (prev_sink != NULL)
 				dc_sink_release(prev_sink);
+			link_disconnect_sink(link);
+
+			return false;
+		}
+		/*
+		 * Abort detection for DP connectors if we have
+		 * no EDID and connector is active converter
+		 * as there are no display downstream
+		 *
+		 */
+		if (dc_is_dp_sst_signal(link->connector_signal) &&
+			(link->dpcd_caps.dongle_type ==
+					DISPLAY_DONGLE_DP_VGA_CONVERTER ||
+			link->dpcd_caps.dongle_type ==
+					DISPLAY_DONGLE_DP_DVI_CONVERTER)) {
+			if (prev_sink)
+				dc_sink_release(prev_sink);
+			link_disconnect_sink(link);
 
 			return false;
 		}
--- a/drivers/gpu/drm/meson/meson_drv.c
+++ b/drivers/gpu/drm/meson/meson_drv.c
@@ -387,11 +387,12 @@ static int meson_probe_remote(struct platform_device *pdev,
 static void meson_drv_shutdown(struct platform_device *pdev)
 {
 	struct meson_drm *priv = dev_get_drvdata(&pdev->dev);
-	struct drm_device *drm = priv->drm;
 
-	DRM_DEBUG_DRIVER("\n");
-	drm_kms_helper_poll_fini(drm);
-	drm_atomic_helper_shutdown(drm);
+	if (!priv)
+		return;
+
+	drm_kms_helper_poll_fini(priv->drm);
+	drm_atomic_helper_shutdown(priv->drm);
 }
 
 static int meson_drv_probe(struct platform_device *pdev)
--- a/drivers/i2c/busses/i2c-i801.c
+++ b/drivers/i2c/busses/i2c-i801.c
@@ -384,11 +384,9 @@ static int i801_check_post(struct i801_priv *priv, int status)
 		dev_err(&priv->pci_dev->dev, "Transaction timeout\n");
 		/* try to stop the current command */
 		dev_dbg(&priv->pci_dev->dev, "Terminating the current operation\n");
-		outb_p(inb_p(SMBHSTCNT(priv)) | SMBHSTCNT_KILL,
-		       SMBHSTCNT(priv));
+		outb_p(SMBHSTCNT_KILL, SMBHSTCNT(priv));
 		usleep_range(1000, 2000);
-		outb_p(inb_p(SMBHSTCNT(priv)) & (~SMBHSTCNT_KILL),
-		       SMBHSTCNT(priv));
+		outb_p(0, SMBHSTCNT(priv));
 
 		/* Check if it worked */
 		status = inb_p(SMBHSTSTS(priv));
--- a/drivers/i2c/busses/i2c-s3c2410.c
+++ b/drivers/i2c/busses/i2c-s3c2410.c
@@ -493,7 +493,10 @@ static int i2c_s3c_irq_nextbyte(struct s3c24xx_i2c *i2c, unsigned long iicstat)
 				 * forces us to send a new START
 				 * when we change direction
 				 */
+				dev_dbg(i2c->dev,
+					"missing START before write->read\n");
 				s3c24xx_i2c_stop(i2c, -EINVAL);
+				break;
 			}
 
 			goto retry_write;
--- a/drivers/iio/adc/ad7793.c
+++ b/drivers/iio/adc/ad7793.c
@@ -279,6 +279,7 @@ static int ad7793_setup(struct iio_dev *indio_dev,
 	id &= AD7793_ID_MASK;
 
 	if (id != st->chip_info->id) {
+		ret = -ENODEV;
 		dev_err(&st->sd.spi->dev, "device ID query failed\n");
 		goto out;
 	}
--- a/drivers/iommu/dmar.c
+++ b/drivers/iommu/dmar.c
@@ -1119,7 +1119,7 @@ static int alloc_iommu(struct dmar_drhd_unit *drhd)
 
 		err = iommu_device_register(&iommu->iommu);
 		if (err)
-			goto err_unmap;
+			goto err_sysfs;
 	}
 
 	drhd->iommu = iommu;
@@ -1127,6 +1127,8 @@ static int alloc_iommu(struct dmar_drhd_unit *drhd)
 
 	return 0;
 
+err_sysfs:
+	iommu_device_sysfs_remove(&iommu->iommu);
 err_unmap:
 	unmap_iommu(iommu);
 error_free_seq_id:
--- a/drivers/isdn/hardware/mISDN/mISDNinfineon.c
+++ b/drivers/isdn/hardware/mISDN/mISDNinfineon.c
@@ -645,17 +645,19 @@ static void
 release_io(struct inf_hw *hw)
 {
 	if (hw->cfg.mode) {
-		if (hw->cfg.p) {
+		if (hw->cfg.mode == AM_MEMIO) {
 			release_mem_region(hw->cfg.start, hw->cfg.size);
-			iounmap(hw->cfg.p);
+			if (hw->cfg.p)
+				iounmap(hw->cfg.p);
 		} else
 			release_region(hw->cfg.start, hw->cfg.size);
 		hw->cfg.mode = AM_NONE;
 	}
 	if (hw->addr.mode) {
-		if (hw->addr.p) {
+		if (hw->addr.mode == AM_MEMIO) {
 			release_mem_region(hw->addr.start, hw->addr.size);
-			iounmap(hw->addr.p);
+			if (hw->addr.p)
+				iounmap(hw->addr.p);
 		} else
 			release_region(hw->addr.start, hw->addr.size);
 		hw->addr.mode = AM_NONE;
@@ -685,9 +687,12 @@ setup_io(struct inf_hw *hw)
 			  (ulong)hw->cfg.start, (ulong)hw->cfg.size);
 		return err;
 	}
-	if (hw->ci->cfg_mode == AM_MEMIO)
-		hw->cfg.p = ioremap(hw->cfg.start, hw->cfg.size);
 	hw->cfg.mode = hw->ci->cfg_mode;
+	if (hw->ci->cfg_mode == AM_MEMIO) {
+		hw->cfg.p = ioremap(hw->cfg.start, hw->cfg.size);
+		if (!hw->cfg.p)
+			return -ENOMEM;
+	}
 	if (debug & DEBUG_HW)
 		pr_notice("%s: IO cfg %lx (%lu bytes) mode%d\n",
 			  hw->name, (ulong)hw->cfg.start,
@@ -712,9 +717,12 @@ setup_io(struct inf_hw *hw)
 			  (ulong)hw->addr.start, (ulong)hw->addr.size);
 		return err;
 	}
-	if (hw->ci->addr_mode == AM_MEMIO)
-		hw->addr.p = ioremap(hw->addr.start, hw->addr.size);
 	hw->addr.mode = hw->ci->addr_mode;
+	if (hw->ci->addr_mode == AM_MEMIO) {
+		hw->addr.p = ioremap(hw->addr.start, hw->addr.size);
+		if (!hw->addr.p)
+			return -ENOMEM;
+	}
 	if (debug & DEBUG_HW)
 		pr_notice("%s: IO addr %lx (%lu bytes) mode%d\n",
 			  hw->name, (ulong)hw->addr.start,
--- a/drivers/md/dm-snap.c
+++ b/drivers/md/dm-snap.c
@@ -794,7 +794,7 @@ static int dm_add_exception(void *context, chunk_t old, chunk_t new)
 static uint32_t __minimum_chunk_size(struct origin *o)
 {
 	struct dm_snapshot *snap;
-	unsigned chunk_size = 0;
+	unsigned chunk_size = rounddown_pow_of_two(UINT_MAX);
 
 	if (o)
 		list_for_each_entry(snap, &o->snapshots, list)
--- a/drivers/media/dvb-frontends/sp8870.c
+++ b/drivers/media/dvb-frontends/sp8870.c
@@ -293,7 +293,9 @@ static int sp8870_set_frontend_parameters(struct dvb_frontend *fe)
 	sp8870_writereg(state, 0xc05, reg0xc05);
 
 	// read status reg in order to clear pending irqs
-	sp8870_readreg(state, 0x200);
+	err = sp8870_readreg(state, 0x200);
+	if (err < 0)
+		return err;
 
 	// system controller start
 	sp8870_microcontroller_start(state);
--- a/drivers/media/usb/gspca/m5602/m5602_po1030.c
+++ b/drivers/media/usb/gspca/m5602/m5602_po1030.c
@@ -159,6 +159,7 @@ static const struct v4l2_ctrl_config po1030_greenbal_cfg = {
 int po1030_probe(struct sd *sd)
 {
 	u8 dev_id_h = 0, i;
+	int err;
 	struct gspca_dev *gspca_dev = (struct gspca_dev *)sd;
 
 	if (force_sensor) {
@@ -177,10 +178,13 @@ int po1030_probe(struct sd *sd)
 	for (i = 0; i < ARRAY_SIZE(preinit_po1030); i++) {
 		u8 data = preinit_po1030[i][2];
 		if (preinit_po1030[i][0] == SENSOR)
-			m5602_write_sensor(sd,
-				preinit_po1030[i][1], &data, 1);
+			err = m5602_write_sensor(sd, preinit_po1030[i][1],
+						 &data, 1);
 		else
-			m5602_write_bridge(sd, preinit_po1030[i][1], data);
+			err = m5602_write_bridge(sd, preinit_po1030[i][1],
+						 data);
+		if (err < 0)
+			return err;
 	}
 
 	if (m5602_read_sensor(sd, PO1030_DEVID_H, &dev_id_h, 1))
--- a/drivers/misc/kgdbts.c
+++ b/drivers/misc/kgdbts.c
@@ -112,8 +112,9 @@
 		printk(KERN_INFO a);	\
 	} while (0)
 #define v2printk(a...) do {		\
-	if (verbose > 1)		\
+	if (verbose > 1) {		\
 		printk(KERN_INFO a);	\
+	}				\
 	touch_nmi_watchdog();		\
 } while (0)
 #define eprintk(a...) do {		\
--- a/drivers/misc/lis3lv02d/lis3lv02d.h
+++ b/drivers/misc/lis3lv02d/lis3lv02d.h
@@ -284,6 +284,7 @@ struct lis3lv02d {
 	int			regs_size;
 	u8                      *reg_cache;
 	bool			regs_stored;
+	bool			init_required;
 	u8                      odr_mask;  /* ODR bit mask */
 	u8			whoami;    /* indicates measurement precision */
 	s16 (*read_data) (struct lis3lv02d *lis3, int reg);
--- a/drivers/misc/mei/interrupt.c
+++ b/drivers/misc/mei/interrupt.c
@@ -224,6 +224,9 @@ static int mei_cl_irq_read(struct mei_cl *cl, struct mei_cl_cb *cb,
 		return ret;
 	}
 
+	pm_runtime_mark_last_busy(dev->dev);
+	pm_request_autosuspend(dev->dev);
+
 	list_move_tail(&cb->list, &cl->rd_pending);
 
 	return 0;
--- a/drivers/net/caif/caif_serial.c
+++ b/drivers/net/caif/caif_serial.c
@@ -279,7 +279,6 @@ static int caif_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct ser_device *ser;
 
-	BUG_ON(dev == NULL);
 	ser = netdev_priv(dev);
 
 	/* Send flow off once, on high water mark */
--- a/drivers/net/dsa/mt7530.c
+++ b/drivers/net/dsa/mt7530.c
@@ -851,14 +851,6 @@ mt7530_port_set_vlan_aware(struct dsa_switch *ds, int port)
 {
 	struct mt7530_priv *priv = ds->priv;
 
-	/* The real fabric path would be decided on the membership in the
-	 * entry of VLAN table. PCR_MATRIX set up here with ALL_MEMBERS
-	 * means potential VLAN can be consisting of certain subset of all
-	 * ports.
-	 */
-	mt7530_rmw(priv, MT7530_PCR_P(port),
-		   PCR_MATRIX_MASK, PCR_MATRIX(MT7530_ALL_MEMBERS));
-
 	/* Trapped into security mode allows packet forwarding through VLAN
 	 * table lookup. CPU port is set to fallback mode to let untagged
 	 * frames pass through.
--- a/drivers/net/ethernet/broadcom/bnx2.c
+++ b/drivers/net/ethernet/broadcom/bnx2.c
@@ -8253,9 +8253,9 @@ bnx2_init_board(struct pci_dev *pdev, struct net_device *dev)
 		BNX2_WR(bp, PCI_COMMAND, reg);
 	} else if ((BNX2_CHIP_ID(bp) == BNX2_CHIP_ID_5706_A1) &&
 		!(bp->flags & BNX2_FLAG_PCIX)) {
-
 		dev_err(&pdev->dev,
 			"5706 A1 can only be used in a PCIX bus, aborting\n");
+		rc = -EPERM;
 		goto err_out_unmap;
 	}
 
--- a/drivers/net/ethernet/brocade/bna/bnad.c
+++ b/drivers/net/ethernet/brocade/bna/bnad.c
@@ -3290,7 +3290,7 @@ bnad_change_mtu(struct net_device *netdev, int new_mtu)
 {
 	int err, mtu;
 	struct bnad *bnad = netdev_priv(netdev);
-	u32 rx_count = 0, frame, new_frame;
+	u32 frame, new_frame;
 
 	mutex_lock(&bnad->conf_mutex);
 
@@ -3306,12 +3306,9 @@ bnad_change_mtu(struct net_device *netdev, int new_mtu)
 		/* only when transition is over 4K */
 		if ((frame <= 4096 && new_frame > 4096) ||
 		    (frame > 4096 && new_frame <= 4096))
-			rx_count = bnad_reinit_rx(bnad);
+			bnad_reinit_rx(bnad);
 	}
 
-	/* rx_count > 0 - new rx created
-	 *	- Linux set err = 0 and return
-	 */
 	err = bnad_mtu_set(bnad, new_frame);
 	if (err)
 		err = -EBUSY;
--- a/drivers/net/ethernet/dec/tulip/de4x5.c
+++ b/drivers/net/ethernet/dec/tulip/de4x5.c
@@ -4927,11 +4927,11 @@ mii_get_oui(u_char phyaddr, u_long ioaddr)
 	u_char breg[2];
     } a;
     int i, r2, r3, ret=0;*/
-    int r2, r3;
+    int r2;
 
     /* Read r2 and r3 */
     r2 = mii_rd(MII_ID0, phyaddr, ioaddr);
-    r3 = mii_rd(MII_ID1, phyaddr, ioaddr);
+    mii_rd(MII_ID1, phyaddr, ioaddr);
     /* SEEQ and Cypress way * /
     / * Shuffle r2 and r3 * /
     a.reg=0;
--- a/drivers/net/ethernet/dec/tulip/media.c
+++ b/drivers/net/ethernet/dec/tulip/media.c
@@ -319,13 +319,8 @@ void tulip_select_media(struct net_device *dev, int startup)
 			break;
 		}
 		case 5: case 6: {
-			u16 setup[5];
-
 			new_csr6 = 0; /* FIXME */
 
-			for (i = 0; i < 5; i++)
-				setup[i] = get_u16(&p[i*2 + 1]);
-
 			if (startup && mtable->has_reset) {
 				struct medialeaf *rleaf = &mtable->mleaf[mtable->has_reset];
 				unsigned char *rst = rleaf->leafdata;
--- a/drivers/net/ethernet/freescale/fec_main.c
+++ b/drivers/net/ethernet/freescale/fec_main.c
@@ -3221,7 +3221,9 @@ static int fec_enet_init(struct net_device *ndev)
 		return ret;
 	}
 
-	fec_enet_alloc_queue(ndev);
+	ret = fec_enet_alloc_queue(ndev);
+	if (ret)
+		return ret;
 
 	bd_size = (fep->total_tx_ring_size + fep->total_rx_ring_size) * dsize;
 
@@ -3229,7 +3231,8 @@ static int fec_enet_init(struct net_device *ndev)
 	cbd_base = dmam_alloc_coherent(&fep->pdev->dev, bd_size, &bd_dma,
 				       GFP_KERNEL);
 	if (!cbd_base) {
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto free_queue_mem;
 	}
 
 	memset(cbd_base, 0, bd_size);
@@ -3309,6 +3312,10 @@ static int fec_enet_init(struct net_device *ndev)
 	fec_enet_update_ethtool_stats(ndev);
 
 	return 0;
+
+free_queue_mem:
+	fec_enet_free_queue(ndev);
+	return ret;
 }
 
 #ifdef CONFIG_OF
--- a/drivers/net/ethernet/fujitsu/fmvj18x_cs.c
+++ b/drivers/net/ethernet/fujitsu/fmvj18x_cs.c
@@ -547,6 +547,11 @@ static int fmvj18x_get_hwinfo(struct pcmcia_device *link, u_char *node_id)
 	return -1;
 
     base = ioremap(link->resource[2]->start, resource_size(link->resource[2]));
+    if (!base) {
+	    pcmcia_release_window(link, link->resource[2]);
+	    return -1;
+    }
+
     pcmcia_map_mem_page(link, link->resource[2], 0);
 
     /*
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
@@ -702,8 +702,6 @@ static bool hns3_tunnel_csum_bug(struct sk_buff *skb)
 	if (!(!skb->encapsulation && l4.udp->dest == htons(IANA_VXLAN_PORT)))
 		return false;
 
-	skb_checksum_help(skb);
-
 	return true;
 }
 
@@ -764,8 +762,7 @@ static int hns3_set_l3l4_type_csum(struct sk_buff *skb, u8 ol4_proto,
 			/* the stack computes the IP header already,
 			 * driver calculate l4 checksum when not TSO.
 			 */
-			skb_checksum_help(skb);
-			return 0;
+			return skb_checksum_help(skb);
 		}
 
 		l3.hdr = skb_inner_network_header(skb);
@@ -796,7 +793,7 @@ static int hns3_set_l3l4_type_csum(struct sk_buff *skb, u8 ol4_proto,
 		break;
 	case IPPROTO_UDP:
 		if (hns3_tunnel_csum_bug(skb))
-			break;
+			return skb_checksum_help(skb);
 
 		hnae3_set_bit(*type_cs_vlan_tso, HNS3_TXD_L4CS_B, 1);
 		hnae3_set_field(*type_cs_vlan_tso,
@@ -821,8 +818,7 @@ static int hns3_set_l3l4_type_csum(struct sk_buff *skb, u8 ol4_proto,
 		/* the stack computes the IP header already,
 		 * driver calculate l4 checksum when not TSO.
 		 */
-		skb_checksum_help(skb);
-		return 0;
+		return skb_checksum_help(skb);
 	}
 
 	return 0;
@@ -467,12 +467,16 @@ static int ixgbe_set_vf_vlan(struct ixgbe_adapter *adapter, int add, int vid,
 	return err;
 }
 
-static s32 ixgbe_set_vf_lpe(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf)
+static int ixgbe_set_vf_lpe(struct ixgbe_adapter *adapter, u32 max_frame, u32 vf)
 {
 	struct ixgbe_hw *hw = &adapter->hw;
-	int max_frame = msgbuf[1];
 	u32 max_frs;
 
+	if (max_frame < ETH_MIN_MTU || max_frame > IXGBE_MAX_JUMBO_FRAME_SIZE) {
+		e_err(drv, "VF max_frame %d out of range\n", max_frame);
+		return -EINVAL;
+	}
+
 	/*
 	 * For 82599EB we have to keep all PFs and VFs operating with
 	 * the same max_frame value in order to avoid sending an oversize
@@ -532,12 +536,6 @@ static s32 ixgbe_set_vf_lpe(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf)
 		}
 	}
 
-	/* MTU < 68 is an error and causes problems on some kernels */
-	if (max_frame > IXGBE_MAX_JUMBO_FRAME_SIZE) {
-		e_err(drv, "VF max_frame %d out of range\n", max_frame);
-		return -EINVAL;
-	}
-
 	/* pull current max frame size from hardware */
 	max_frs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
 	max_frs &= IXGBE_MHADD_MFS_MASK;
@@ -1240,7 +1238,7 @@ static int ixgbe_rcv_msg_from_vf(struct ixgbe_adapter *adapter, u32 vf)
 		retval = ixgbe_set_vf_vlan_msg(adapter, msgbuf, vf);
 		break;
 	case IXGBE_VF_SET_LPE:
-		retval = ixgbe_set_vf_lpe(adapter, msgbuf, vf);
+		retval = ixgbe_set_vf_lpe(adapter, msgbuf[1], vf);
 		break;
 	case IXGBE_VF_SET_MACVLAN:
 		retval = ixgbe_set_vf_macvlan_msg(adapter, msgbuf, vf);
@@ -2011,8 +2011,6 @@ static int mlx4_en_set_tunable(struct net_device *dev,
 	return ret;
 }
 
-#define MLX4_EEPROM_PAGE_LEN 256
-
 static int mlx4_en_get_module_info(struct net_device *dev,
 				   struct ethtool_modinfo *modinfo)
 {
@@ -2047,7 +2045,7 @@ static int mlx4_en_get_module_info(struct net_device *dev,
 		break;
 	case MLX4_MODULE_ID_SFP:
 		modinfo->type = ETH_MODULE_SFF_8472;
-		modinfo->eeprom_len = MLX4_EEPROM_PAGE_LEN;
+		modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN;
 		break;
 	default:
 		return -EINVAL;
@@ -862,6 +862,7 @@ netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev)
 	struct mlx4_en_tx_desc *tx_desc;
 	struct mlx4_wqe_data_seg *data;
 	struct mlx4_en_tx_info *tx_info;
+	u32 __maybe_unused ring_cons;
 	int tx_ind;
 	int nr_txbb;
 	int desc_size;
@@ -875,7 +876,6 @@ netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev)
 	bool stop_queue;
 	bool inline_ok;
 	u8 data_offset;
-	u32 ring_cons;
 	bool bf_ok;
 
 	tx_ind = skb_get_queue_mapping(skb);
@@ -1973,6 +1973,7 @@ EXPORT_SYMBOL(mlx4_get_roce_gid_from_slave);
 #define I2C_ADDR_LOW  0x50
 #define I2C_ADDR_HIGH 0x51
 #define I2C_PAGE_SIZE 256
+#define I2C_HIGH_PAGE_SIZE 128
 
 /* Module Info Data */
 struct mlx4_cable_info {
@@ -2026,6 +2027,88 @@ static inline const char *cable_info_mad_err_str(u16 mad_status)
 	return "Unknown Error";
 }
 
+static int mlx4_get_module_id(struct mlx4_dev *dev, u8 port, u8 *module_id)
+{
+	struct mlx4_cmd_mailbox *inbox, *outbox;
+	struct mlx4_mad_ifc *inmad, *outmad;
+	struct mlx4_cable_info *cable_info;
+	int ret;
+
+	inbox = mlx4_alloc_cmd_mailbox(dev);
+	if (IS_ERR(inbox))
+		return PTR_ERR(inbox);
+
+	outbox = mlx4_alloc_cmd_mailbox(dev);
+	if (IS_ERR(outbox)) {
+		mlx4_free_cmd_mailbox(dev, inbox);
+		return PTR_ERR(outbox);
+	}
+
+	inmad = (struct mlx4_mad_ifc *)(inbox->buf);
+	outmad = (struct mlx4_mad_ifc *)(outbox->buf);
+
+	inmad->method = 0x1; /* Get */
+	inmad->class_version = 0x1;
+	inmad->mgmt_class = 0x1;
+	inmad->base_version = 0x1;
+	inmad->attr_id = cpu_to_be16(0xFF60); /* Module Info */
+
+	cable_info = (struct mlx4_cable_info *)inmad->data;
+	cable_info->dev_mem_address = 0;
+	cable_info->page_num = 0;
+	cable_info->i2c_addr = I2C_ADDR_LOW;
+	cable_info->size = cpu_to_be16(1);
+
+	ret = mlx4_cmd_box(dev, inbox->dma, outbox->dma, port, 3,
+			   MLX4_CMD_MAD_IFC, MLX4_CMD_TIME_CLASS_C,
+			   MLX4_CMD_NATIVE);
+	if (ret)
+		goto out;
+
+	if (be16_to_cpu(outmad->status)) {
+		/* Mad returned with bad status */
+		ret = be16_to_cpu(outmad->status);
+		mlx4_warn(dev,
+			  "MLX4_CMD_MAD_IFC Get Module ID attr(%x) port(%d) i2c_addr(%x) offset(%d) size(%d): Response Mad Status(%x) - %s\n",
+			  0xFF60, port, I2C_ADDR_LOW, 0, 1, ret,
+			  cable_info_mad_err_str(ret));
+		ret = -ret;
+		goto out;
+	}
+	cable_info = (struct mlx4_cable_info *)outmad->data;
+	*module_id = cable_info->data[0];
+out:
+	mlx4_free_cmd_mailbox(dev, inbox);
+	mlx4_free_cmd_mailbox(dev, outbox);
+	return ret;
+}
+
+static void mlx4_sfp_eeprom_params_set(u8 *i2c_addr, u8 *page_num, u16 *offset)
+{
+	*i2c_addr = I2C_ADDR_LOW;
+	*page_num = 0;
+
+	if (*offset < I2C_PAGE_SIZE)
+		return;
+
+	*i2c_addr = I2C_ADDR_HIGH;
+	*offset -= I2C_PAGE_SIZE;
+}
+
+static void mlx4_qsfp_eeprom_params_set(u8 *i2c_addr, u8 *page_num, u16 *offset)
+{
+	/* Offsets 0-255 belong to page 0.
+	 * Offsets 256-639 belong to pages 01, 02, 03.
+	 * For example, offset 400 is page 02: 1 + (400 - 256) / 128 = 2
+	 */
+	if (*offset < I2C_PAGE_SIZE)
+		*page_num = 0;
+	else
+		*page_num = 1 + (*offset - I2C_PAGE_SIZE) / I2C_HIGH_PAGE_SIZE;
+	*i2c_addr = I2C_ADDR_LOW;
+	*offset -= *page_num * I2C_HIGH_PAGE_SIZE;
+}
+
 /**
  * mlx4_get_module_info - Read cable module eeprom data
  * @dev: mlx4_dev.
@@ -2045,12 +2128,30 @@ int mlx4_get_module_info(struct mlx4_dev *dev, u8 port,
 	struct mlx4_cmd_mailbox *inbox, *outbox;
 	struct mlx4_mad_ifc *inmad, *outmad;
 	struct mlx4_cable_info *cable_info;
-	u16 i2c_addr;
+	u8 module_id, i2c_addr, page_num;
 	int ret;
 
 	if (size > MODULE_INFO_MAX_READ)
 		size = MODULE_INFO_MAX_READ;
 
+	ret = mlx4_get_module_id(dev, port, &module_id);
+	if (ret)
+		return ret;
+
+	switch (module_id) {
+	case MLX4_MODULE_ID_SFP:
+		mlx4_sfp_eeprom_params_set(&i2c_addr, &page_num, &offset);
+		break;
+	case MLX4_MODULE_ID_QSFP:
+	case MLX4_MODULE_ID_QSFP_PLUS:
+	case MLX4_MODULE_ID_QSFP28:
+		mlx4_qsfp_eeprom_params_set(&i2c_addr, &page_num, &offset);
+		break;
+	default:
+		mlx4_err(dev, "Module ID not recognized: %#x\n", module_id);
+		return -EINVAL;
+	}
+
 	inbox = mlx4_alloc_cmd_mailbox(dev);
 	if (IS_ERR(inbox))
 		return PTR_ERR(inbox);
@@ -2076,11 +2177,9 @@ int mlx4_get_module_info(struct mlx4_dev *dev, u8 port,
 		 */
 		size -= offset + size - I2C_PAGE_SIZE;
 
-	i2c_addr = I2C_ADDR_LOW;
-
 	cable_info = (struct mlx4_cable_info *)inmad->data;
 	cable_info->dev_mem_address = cpu_to_be16(offset);
-	cable_info->page_num = 0;
+	cable_info->page_num = page_num;
 	cable_info->i2c_addr = i2c_addr;
 	cable_info->size = cpu_to_be16(size);
@@ -1657,8 +1657,7 @@ static inline void set_tx_len(struct ksz_desc *desc, u32 len)
 
 #define HW_DELAY(hw, reg)			\
 	do {					\
-		u16 dummy;			\
-		dummy = readw(hw->io + reg);	\
+		readw(hw->io + reg);		\
 	} while (0)
 
 /**
@@ -156,9 +156,8 @@ static void lan743x_tx_isr(void *context, u32 int_sts, u32 flags)
 	struct lan743x_tx *tx = context;
 	struct lan743x_adapter *adapter = tx->adapter;
 	bool enable_flag = true;
-	u32 int_en = 0;
 
-	int_en = lan743x_csr_read(adapter, INT_EN_SET);
+	lan743x_csr_read(adapter, INT_EN_SET);
 	if (flags & LAN743X_VECTOR_FLAG_SOURCE_ENABLE_CLEAR) {
 		lan743x_csr_write(adapter, INT_EN_CLR,
 				  INT_BIT_DMA_TX_(tx->channel_number));
@@ -1635,10 +1634,9 @@ static int lan743x_tx_napi_poll(struct napi_struct *napi, int weight)
 	bool start_transmitter = false;
 	unsigned long irq_flags = 0;
 	u32 ioc_bit = 0;
-	u32 int_sts = 0;
 
 	ioc_bit = DMAC_INT_BIT_TX_IOC_(tx->channel_number);
-	int_sts = lan743x_csr_read(adapter, DMAC_INT_STS);
+	lan743x_csr_read(adapter, DMAC_INT_STS);
 	if (tx->vector_flags & LAN743X_VECTOR_FLAG_SOURCE_STATUS_W2C)
 		lan743x_csr_write(adapter, DMAC_INT_STS, ioc_bit);
 	spin_lock_irqsave(&tx->ring_lock, irq_flags);
@@ -29,8 +29,6 @@
  */
 enum vxge_hw_status vxge_hw_vpath_intr_enable(struct __vxge_hw_vpath_handle *vp)
 {
-	u64 val64;
-
 	struct __vxge_hw_virtualpath *vpath;
 	struct vxge_hw_vpath_reg __iomem *vp_reg;
 	enum vxge_hw_status status = VXGE_HW_OK;
@@ -83,7 +81,7 @@ enum vxge_hw_status vxge_hw_vpath_intr_enable(struct __vxge_hw_vpath_handle *vp)
 	__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
 			&vp_reg->xgmac_vp_int_status);
 
-	val64 = readq(&vp_reg->vpath_general_int_status);
+	readq(&vp_reg->vpath_general_int_status);
 
 	/* Mask unwanted interrupts */
 
@@ -156,8 +154,6 @@ exit:
 enum vxge_hw_status vxge_hw_vpath_intr_disable(
 			struct __vxge_hw_vpath_handle *vp)
 {
-	u64 val64;
-
 	struct __vxge_hw_virtualpath *vpath;
 	enum vxge_hw_status status = VXGE_HW_OK;
 	struct vxge_hw_vpath_reg __iomem *vp_reg;
@@ -178,8 +174,6 @@ enum vxge_hw_status vxge_hw_vpath_intr_disable(
 			(u32)VXGE_HW_INTR_MASK_ALL,
 			&vp_reg->vpath_general_int_mask);
 
-	val64 = VXGE_HW_TIM_CLR_INT_EN_VP(1 << (16 - vpath->vp_id));
-
 	writeq(VXGE_HW_INTR_MASK_ALL, &vp_reg->kdfcctl_errors_mask);
 
 	__vxge_hw_pio_mem_write32_upper((u32)VXGE_HW_INTR_MASK_ALL,
@@ -486,9 +480,7 @@ void vxge_hw_device_unmask_all(struct __vxge_hw_device *hldev)
  */
 void vxge_hw_device_flush_io(struct __vxge_hw_device *hldev)
 {
-	u32 val32;
-
-	val32 = readl(&hldev->common_reg->titan_general_int_status);
+	readl(&hldev->common_reg->titan_general_int_status);
 }
 
 /**
@@ -1726,8 +1718,8 @@ void vxge_hw_fifo_txdl_free(struct __vxge_hw_fifo *fifo, void *txdlh)
 enum vxge_hw_status
 vxge_hw_vpath_mac_addr_add(
 	struct __vxge_hw_vpath_handle *vp,
-	u8 (macaddr)[ETH_ALEN],
-	u8 (macaddr_mask)[ETH_ALEN],
+	u8 *macaddr,
+	u8 *macaddr_mask,
 	enum vxge_hw_vpath_mac_addr_add_mode duplicate_mode)
 {
 	u32 i;
@@ -1789,8 +1781,8 @@ exit:
 enum vxge_hw_status
 vxge_hw_vpath_mac_addr_get(
 	struct __vxge_hw_vpath_handle *vp,
-	u8 (macaddr)[ETH_ALEN],
-	u8 (macaddr_mask)[ETH_ALEN])
+	u8 *macaddr,
+	u8 *macaddr_mask)
 {
 	u32 i;
 	u64 data1 = 0ULL;
@@ -1841,8 +1833,8 @@ exit:
 enum vxge_hw_status
 vxge_hw_vpath_mac_addr_get_next(
 	struct __vxge_hw_vpath_handle *vp,
-	u8 (macaddr)[ETH_ALEN],
-	u8 (macaddr_mask)[ETH_ALEN])
+	u8 *macaddr,
+	u8 *macaddr_mask)
 {
 	u32 i;
 	u64 data1 = 0ULL;
@@ -1894,8 +1886,8 @@ exit:
 enum vxge_hw_status
 vxge_hw_vpath_mac_addr_delete(
 	struct __vxge_hw_vpath_handle *vp,
-	u8 (macaddr)[ETH_ALEN],
-	u8 (macaddr_mask)[ETH_ALEN])
+	u8 *macaddr,
+	u8 *macaddr_mask)
 {
 	u32 i;
 	u64 data1 = 0ULL;
@@ -2385,7 +2377,6 @@ enum vxge_hw_status vxge_hw_vpath_poll_rx(struct __vxge_hw_ring *ring)
 	u8 t_code;
 	enum vxge_hw_status status = VXGE_HW_OK;
 	void *first_rxdh;
-	u64 val64 = 0;
 	int new_count = 0;
 
 	ring->cmpl_cnt = 0;
@@ -2413,8 +2404,7 @@ enum vxge_hw_status vxge_hw_vpath_poll_rx(struct __vxge_hw_ring *ring)
 		}
 		writeq(VXGE_HW_PRC_RXD_DOORBELL_NEW_QW_CNT(new_count),
 			&ring->vp_reg->prc_rxd_doorbell);
-		val64 =
-		readl(&ring->common_reg->titan_general_int_status);
+		readl(&ring->common_reg->titan_general_int_status);
 		ring->doorbell_cnt = 0;
 	}
 }
@@ -873,17 +873,12 @@ static u16 ef4_farch_handle_rx_not_ok(struct ef4_rx_queue *rx_queue,
 {
 	struct ef4_channel *channel = ef4_rx_queue_channel(rx_queue);
 	struct ef4_nic *efx = rx_queue->efx;
-	bool rx_ev_buf_owner_id_err, rx_ev_ip_hdr_chksum_err;
+	bool __maybe_unused rx_ev_buf_owner_id_err, rx_ev_ip_hdr_chksum_err;
 	bool rx_ev_tcp_udp_chksum_err, rx_ev_eth_crc_err;
 	bool rx_ev_frm_trunc, rx_ev_drib_nib, rx_ev_tobe_disc;
-	bool rx_ev_other_err, rx_ev_pause_frm;
-	bool rx_ev_hdr_type, rx_ev_mcast_pkt;
-	unsigned rx_ev_pkt_type;
+	bool rx_ev_pause_frm;
 
-	rx_ev_hdr_type = EF4_QWORD_FIELD(*event, FSF_AZ_RX_EV_HDR_TYPE);
-	rx_ev_mcast_pkt = EF4_QWORD_FIELD(*event, FSF_AZ_RX_EV_MCAST_PKT);
 	rx_ev_tobe_disc = EF4_QWORD_FIELD(*event, FSF_AZ_RX_EV_TOBE_DISC);
-	rx_ev_pkt_type = EF4_QWORD_FIELD(*event, FSF_AZ_RX_EV_PKT_TYPE);
 	rx_ev_buf_owner_id_err = EF4_QWORD_FIELD(*event,
 						 FSF_AZ_RX_EV_BUF_OWNER_ID_ERR);
 	rx_ev_ip_hdr_chksum_err = EF4_QWORD_FIELD(*event,
@@ -896,10 +891,6 @@ static u16 ef4_farch_handle_rx_not_ok(struct ef4_rx_queue *rx_queue,
 			  0 : EF4_QWORD_FIELD(*event, FSF_AA_RX_EV_DRIB_NIB));
 	rx_ev_pause_frm = EF4_QWORD_FIELD(*event, FSF_AZ_RX_EV_PAUSE_FRM_ERR);
 
-	/* Every error apart from tobe_disc and pause_frm */
-	rx_ev_other_err = (rx_ev_drib_nib | rx_ev_tcp_udp_chksum_err |
-			   rx_ev_buf_owner_id_err | rx_ev_eth_crc_err |
-			   rx_ev_frm_trunc | rx_ev_ip_hdr_chksum_err);
-
 	/* Count errors that are not in MAC stats.  Ignore expected
 	 * checksum errors during self-test. */
@@ -919,6 +910,13 @@ static u16 ef4_farch_handle_rx_not_ok(struct ef4_rx_queue *rx_queue,
 	 * to a FIFO overflow.
 	 */
 #ifdef DEBUG
+	{
+	/* Every error apart from tobe_disc and pause_frm */
+
+	bool rx_ev_other_err = (rx_ev_drib_nib | rx_ev_tcp_udp_chksum_err |
+				rx_ev_buf_owner_id_err | rx_ev_eth_crc_err |
+				rx_ev_frm_trunc | rx_ev_ip_hdr_chksum_err);
+
 	if (rx_ev_other_err && net_ratelimit()) {
 		netif_dbg(efx, rx_err, efx->net_dev,
 			  " RX queue %d unexpected RX event "
@@ -935,6 +933,7 @@ static u16 ef4_farch_handle_rx_not_ok(struct ef4_rx_queue *rx_queue,
 			  rx_ev_tobe_disc ? " [TOBE_DISC]" : "",
 			  rx_ev_pause_frm ? " [PAUSE]" : "");
 	}
+	}
 #endif
 
 	/* The frame must be discarded if any of these are true. */
@@ -1646,15 +1645,11 @@ void ef4_farch_rx_push_indir_table(struct ef4_nic *efx)
  */
 void ef4_farch_dimension_resources(struct ef4_nic *efx, unsigned sram_lim_qw)
 {
-	unsigned vi_count, buftbl_min;
+	unsigned vi_count;
 
 	/* Account for the buffer table entries backing the datapath channels
 	 * and the descriptor caches for those channels.
 	 */
-	buftbl_min = ((efx->n_rx_channels * EF4_MAX_DMAQ_SIZE +
-		       efx->n_tx_channels * EF4_TXQ_TYPES * EF4_MAX_DMAQ_SIZE +
-		       efx->n_channels * EF4_MAX_EVQ_SIZE)
-		      * sizeof(ef4_qword_t) / EF4_BUF_SIZE);
 	vi_count = max(efx->n_channels, efx->n_tx_channels * EF4_TXQ_TYPES);
 
 	efx->tx_dc_base = sram_lim_qw - vi_count * TX_DC_ENTRIES;
@@ -2535,7 +2530,6 @@ int ef4_farch_filter_remove_safe(struct ef4_nic *efx,
 	enum ef4_farch_filter_table_id table_id;
 	struct ef4_farch_filter_table *table;
 	unsigned int filter_idx;
-	struct ef4_farch_filter_spec *spec;
 	int rc;
 
 	table_id = ef4_farch_filter_id_table_id(filter_id);
@@ -2546,7 +2540,6 @@ int ef4_farch_filter_remove_safe(struct ef4_nic *efx,
 	filter_idx = ef4_farch_filter_id_index(filter_id);
 	if (filter_idx >= table->size)
 		return -ENOENT;
-	spec = &table->spec[filter_idx];
 
 	spin_lock_bh(&efx->filter_lock);
 	rc = ef4_farch_filter_remove(efx, table, filter_idx, priority);
@@ -783,10 +783,9 @@ static u16 sis900_default_phy(struct net_device * net_dev)
 static void sis900_set_capability(struct net_device *net_dev, struct mii_phy *phy)
 {
 	u16 cap;
-	u16 status;
 
-	status = mdio_read(net_dev, phy->phy_addr, MII_STATUS);
-	status = mdio_read(net_dev, phy->phy_addr, MII_STATUS);
+	mdio_read(net_dev, phy->phy_addr, MII_STATUS);
+	mdio_read(net_dev, phy->phy_addr, MII_STATUS);
 
 	cap = MII_NWAY_CSMA_CD |
 		((phy->status & MII_STAT_CAN_TX_FDX)? MII_NWAY_TX_FDX:0) |
@@ -513,7 +513,7 @@ void xlgmac_get_all_hw_features(struct xlgmac_pdata *pdata)
 
 void xlgmac_print_all_hw_features(struct xlgmac_pdata *pdata)
 {
-	char *str = NULL;
+	char __maybe_unused *str = NULL;
 
 	XLGMAC_PR("\n");
 	XLGMAC_PR("=====================================================\n");
@@ -1240,7 +1240,7 @@ static int emac_poll(struct napi_struct *napi, int budget)
 	struct net_device *ndev = priv->ndev;
 	struct device *emac_dev = &ndev->dev;
 	u32 status = 0;
-	u32 num_tx_pkts = 0, num_rx_pkts = 0;
+	u32 num_rx_pkts = 0;
 
 	/* Check interrupt vectors and call packet processing */
 	status = emac_read(EMAC_MACINVECTOR);
@@ -1251,8 +1251,7 @@ static int emac_poll(struct napi_struct *napi, int budget)
 		mask = EMAC_DM646X_MAC_IN_VECTOR_TX_INT_VEC;
 
 	if (status & mask) {
-		num_tx_pkts = cpdma_chan_process(priv->txchan,
-						 EMAC_DEF_TX_MAX_SERVICE);
+		cpdma_chan_process(priv->txchan, EMAC_DEF_TX_MAX_SERVICE);
 	} /* TX processing */
 
 	mask = EMAC_DM644X_MAC_IN_VECTOR_RX_INT_VEC;
@@ -1364,9 +1364,9 @@ int netcp_txpipe_open(struct netcp_tx_pipe *tx_pipe)
 	tx_pipe->dma_queue = knav_queue_open(name, tx_pipe->dma_queue_id,
 					     KNAV_QUEUE_SHARED);
 	if (IS_ERR(tx_pipe->dma_queue)) {
+		ret = PTR_ERR(tx_pipe->dma_queue);
 		dev_err(dev, "Could not open DMA queue for channel \"%s\": %d\n",
 			name, ret);
-		ret = PTR_ERR(tx_pipe->dma_queue);
 		goto err;
 	}
@@ -671,7 +671,6 @@ module_exit(tlan_exit);
 static void  __init tlan_eisa_probe(void)
 {
 	long	ioaddr;
-	int	rc = -ENODEV;
 	int	irq;
 	u16	device_id;
 
@@ -736,8 +735,7 @@ static void  __init tlan_eisa_probe(void)
 
 		/* Setup the newly found eisa adapter */
-		rc = tlan_probe1(NULL, ioaddr, irq,
-				 12, NULL);
+		tlan_probe1(NULL, ioaddr, irq, 12, NULL);
 		continue;
 
 out:
@@ -875,26 +875,13 @@ static u32 check_connection_type(struct mac_regs __iomem *regs)
  */
 static int velocity_set_media_mode(struct velocity_info *vptr, u32 mii_status)
 {
-	u32 curr_status;
 	struct mac_regs __iomem *regs = vptr->mac_regs;
 
 	vptr->mii_status = mii_check_media_mode(vptr->mac_regs);
-	curr_status = vptr->mii_status & (~VELOCITY_LINK_FAIL);
 
 	/* Set mii link status */
 	set_mii_flow_control(vptr);
 
-	/*
-	   Check if new status is consistent with current status
-	   if (((mii_status & curr_status) & VELOCITY_AUTONEG_ENABLE) ||
-	       (mii_status==curr_status)) {
-		vptr->mii_status=mii_check_media_mode(vptr->mac_regs);
-		vptr->mii_status=check_connection_type(vptr->mac_regs);
-		VELOCITY_PRT(MSG_LEVEL_INFO, "Velocity link no change\n");
-		return 0;
-		}
-	 */
-
 	if (PHYID_GET_PHY_ID(vptr->phy_id) == PHYID_CICADA_CS8201)
 		MII_REG_BITS_ON(AUXCR_MDPPS, MII_NCONFIG, vptr->mac_regs);
@@ -75,7 +75,6 @@ static int octeon_mdiobus_probe(struct platform_device *pdev)
 
 	return 0;
 fail_register:
-	mdiobus_free(bus->mii_bus);
 	smi_en.u64 = 0;
 	oct_mdio_writeq(smi_en.u64, bus->register_base + SMI_EN);
 	return err;
@@ -89,7 +88,6 @@ static int octeon_mdiobus_remove(struct platform_device *pdev)
 	bus = platform_get_drvdata(pdev);
 
 	mdiobus_unregister(bus->mii_bus);
-	mdiobus_free(bus->mii_bus);
 	smi_en.u64 = 0;
 	oct_mdio_writeq(smi_en.u64, bus->register_base + SMI_EN);
 	return 0;
@@ -129,7 +129,6 @@ static void thunder_mdiobus_pci_remove(struct pci_dev *pdev)
 			continue;
 
 		mdiobus_unregister(bus->mii_bus);
-		mdiobus_free(bus->mii_bus);
 		oct_mdio_writeq(0, bus->register_base + SMI_EN);
 	}
 	pci_set_drvdata(pdev, NULL);
@@ -1703,7 +1703,7 @@ static int hso_serial_tiocmset(struct tty_struct *tty,
 	spin_unlock_irqrestore(&serial->serial_lock, flags);
 
 	return usb_control_msg(serial->parent->usb,
-			       usb_rcvctrlpipe(serial->parent->usb, 0), 0x22,
+			       usb_sndctrlpipe(serial->parent->usb, 0), 0x22,
 			       0x21, val, if_num, NULL, 0,
 			       USB_CTRL_SET_TIMEOUT);
 }
@@ -2450,7 +2450,7 @@ static int hso_rfkill_set_block(void *data, bool blocked)
 	if (hso_dev->usb_gone)
 		rv = 0;
 	else
-		rv = usb_control_msg(hso_dev->usb, usb_rcvctrlpipe(hso_dev->usb, 0),
+		rv = usb_control_msg(hso_dev->usb, usb_sndctrlpipe(hso_dev->usb, 0),
 				       enabled ? 0x82 : 0x81, 0x40, 0, 0, NULL, 0,
 				       USB_CTRL_SET_TIMEOUT);
 	mutex_unlock(&hso_dev->mutex);
@@ -1495,7 +1495,7 @@ static int smsc75xx_bind(struct usbnet *dev, struct usb_interface *intf)
 	ret = smsc75xx_wait_ready(dev, 0);
 	if (ret < 0) {
 		netdev_warn(dev->net, "device not ready in smsc75xx_bind\n");
-		return ret;
+		goto err;
 	}
 
 	smsc75xx_init_mac_address(dev);
@@ -1504,7 +1504,7 @@ static int smsc75xx_bind(struct usbnet *dev, struct usb_interface *intf)
 	ret = smsc75xx_reset(dev);
 	if (ret < 0) {
 		netdev_warn(dev->net, "smsc75xx_reset error %d\n", ret);
-		return ret;
+		goto err;
 	}
 
 	dev->net->netdev_ops = &smsc75xx_netdev_ops;
@@ -1514,6 +1514,10 @@ static int smsc75xx_bind(struct usbnet *dev, struct usb_interface *intf)
 	dev->hard_mtu = dev->net->mtu + dev->net->hard_header_len;
 	dev->net->max_mtu = MAX_SINGLE_PACKET_SIZE;
 	return 0;
+
+err:
+	kfree(pdata);
+	return ret;
 }
 
 static void smsc75xx_unbind(struct usbnet *dev, struct usb_interface *intf)
@@ -1769,14 +1769,62 @@ static void ath10k_htt_rx_h_unchain(struct ath10k *ar,
 	ath10k_unchain_msdu(amsdu, unchain_cnt);
 }
 
+static bool ath10k_htt_rx_validate_amsdu(struct ath10k *ar,
+					 struct sk_buff_head *amsdu)
+{
+	u8 *subframe_hdr;
+	struct sk_buff *first;
+	bool is_first, is_last;
+	struct htt_rx_desc *rxd;
+	struct ieee80211_hdr *hdr;
+	size_t hdr_len, crypto_len;
+	enum htt_rx_mpdu_encrypt_type enctype;
+	int bytes_aligned = ar->hw_params.decap_align_bytes;
+
+	first = skb_peek(amsdu);
+
+	rxd = (void *)first->data - sizeof(*rxd);
+	hdr = (void *)rxd->rx_hdr_status;
+
+	is_first = !!(rxd->msdu_end.common.info0 &
+		      __cpu_to_le32(RX_MSDU_END_INFO0_FIRST_MSDU));
+	is_last = !!(rxd->msdu_end.common.info0 &
+		     __cpu_to_le32(RX_MSDU_END_INFO0_LAST_MSDU));
+
+	/* Return in case of non-aggregated msdu */
+	if (is_first && is_last)
+		return true;
+
+	/* First msdu flag is not set for the first msdu of the list */
+	if (!is_first)
+		return false;
+
+	enctype = MS(__le32_to_cpu(rxd->mpdu_start.info0),
+		     RX_MPDU_START_INFO0_ENCRYPT_TYPE);
+
+	hdr_len = ieee80211_hdrlen(hdr->frame_control);
+	crypto_len = ath10k_htt_rx_crypto_param_len(ar, enctype);
+
+	subframe_hdr = (u8 *)hdr + round_up(hdr_len, bytes_aligned) +
+		       crypto_len;
+
+	/* Validate if the amsdu has a proper first subframe.
+	 * There are chances a single msdu can be received as amsdu when
+	 * the unauthenticated amsdu flag of a QoS header
+	 * gets flipped in non-SPP AMSDU's, in such cases the first
+	 * subframe has llc/snap header in place of a valid da.
+	 * return false if the da matches rfc1042 pattern
+	 */
+	if (ether_addr_equal(subframe_hdr, rfc1042_header))
+		return false;
+
+	return true;
+}
+
 static bool ath10k_htt_rx_amsdu_allowed(struct ath10k *ar,
 					struct sk_buff_head *amsdu,
 					struct ieee80211_rx_status *rx_status)
 {
-	/* FIXME: It might be a good idea to do some fuzzy-testing to drop
-	 * invalid/dangerous frames.
-	 */
-
 	if (!rx_status->freq) {
 		ath10k_dbg(ar, ATH10K_DBG_HTT, "no channel configured; ignoring frame(s)!\n");
 		return false;
@@ -1787,6 +1835,11 @@ static bool ath10k_htt_rx_amsdu_allowed(struct ath10k *ar,
 		return false;
 	}
 
+	if (!ath10k_htt_rx_validate_amsdu(ar, amsdu)) {
+		ath10k_dbg(ar, ATH10K_DBG_HTT, "invalid amsdu received\n");
+		return false;
+	}
+
 	return true;
 }
 
@@ -793,19 +793,6 @@ static const struct attribute_group mesh_ie_group = {
 	.attrs = mesh_ie_attrs,
 };
 
-static void lbs_persist_config_init(struct net_device *dev)
-{
-	int ret;
-	ret = sysfs_create_group(&(dev->dev.kobj), &boot_opts_group);
-	ret = sysfs_create_group(&(dev->dev.kobj), &mesh_ie_group);
-}
-
-static void lbs_persist_config_remove(struct net_device *dev)
-{
-	sysfs_remove_group(&(dev->dev.kobj), &boot_opts_group);
-	sysfs_remove_group(&(dev->dev.kobj), &mesh_ie_group);
-}
-
 /***************************************************************************
  * Initializing and starting, stopping mesh
@@ -1005,6 +992,10 @@ static int lbs_add_mesh(struct lbs_private *priv)
 	SET_NETDEV_DEV(priv->mesh_dev, priv->dev->dev.parent);
 
 	mesh_dev->flags |= IFF_BROADCAST | IFF_MULTICAST;
+	mesh_dev->sysfs_groups[0] = &lbs_mesh_attr_group;
+	mesh_dev->sysfs_groups[1] = &boot_opts_group;
+	mesh_dev->sysfs_groups[2] = &mesh_ie_group;
+
 	/* Register virtual mesh interface */
 	ret = register_netdev(mesh_dev);
 	if (ret) {
@@ -1012,19 +1003,10 @@ static int lbs_add_mesh(struct lbs_private *priv)
 		goto err_free_netdev;
 	}
 
-	ret = sysfs_create_group(&(mesh_dev->dev.kobj), &lbs_mesh_attr_group);
-	if (ret)
-		goto err_unregister;
-
-	lbs_persist_config_init(mesh_dev);
-
 	/* Everything successful */
 	ret = 0;
 	goto done;
 
-err_unregister:
-	unregister_netdev(mesh_dev);
-
 err_free_netdev:
 	free_netdev(mesh_dev);
 
@@ -1045,8 +1027,6 @@ void lbs_remove_mesh(struct lbs_private *priv)
 
 	netif_stop_queue(mesh_dev);
 	netif_carrier_off(mesh_dev);
-	sysfs_remove_group(&(mesh_dev->dev.kobj), &lbs_mesh_attr_group);
-	lbs_persist_config_remove(mesh_dev);
 	unregister_netdev(mesh_dev);
 	priv->mesh_dev = NULL;
 	kfree(mesh_dev->ieee80211_ptr);
@@ -30,12 +30,14 @@ MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Alex Hung");
 MODULE_ALIAS("acpi*:HPQ6001:*");
 MODULE_ALIAS("acpi*:WSTADEF:*");
+MODULE_ALIAS("acpi*:AMDI0051:*");
 
 static struct input_dev *hpwl_input_dev;
 
 static const struct acpi_device_id hpwl_ids[] = {
 	{"HPQ6001", 0},
 	{"WSTADEF", 0},
+	{"AMDI0051", 0},
 	{"", 0},
 };
 
@@ -101,6 +101,9 @@ MODULE_DEVICE_TABLE(acpi, lis3lv02d_device_ids);
 static int lis3lv02d_acpi_init(struct lis3lv02d *lis3)
 {
 	struct acpi_device *dev = lis3->bus_priv;
+	if (!lis3->init_required)
+		return 0;
+
 	if (acpi_evaluate_object(dev->handle, METHOD_NAME__INI,
 				 NULL, NULL) != AE_OK)
 		return -EINVAL;
@@ -367,6 +370,7 @@ static int lis3lv02d_add(struct acpi_device *device)
 	}
 
 	/* call the core layer do its init */
+	lis3_dev.init_required = true;
 	ret = lis3lv02d_init_device(&lis3_dev);
 	if (ret)
 		return ret;
@@ -414,11 +418,27 @@ static int lis3lv02d_suspend(struct device *dev)
 
 static int lis3lv02d_resume(struct device *dev)
 {
+	lis3_dev.init_required = false;
 	lis3lv02d_poweron(&lis3_dev);
 	return 0;
 }
 
-static SIMPLE_DEV_PM_OPS(hp_accel_pm, lis3lv02d_suspend, lis3lv02d_resume);
+static int lis3lv02d_restore(struct device *dev)
+{
+	lis3_dev.init_required = true;
+	lis3lv02d_poweron(&lis3_dev);
+	return 0;
+}
+
+static const struct dev_pm_ops hp_accel_pm = {
+	.suspend = lis3lv02d_suspend,
+	.resume = lis3lv02d_resume,
+	.freeze = lis3lv02d_suspend,
+	.thaw = lis3lv02d_resume,
+	.poweroff = lis3lv02d_suspend,
+	.restore = lis3lv02d_restore,
+};
+
 #define HP_ACCEL_PM (&hp_accel_pm)
 #else
 #define HP_ACCEL_PM NULL
@@ -331,6 +331,7 @@ static const struct acpi_device_id punit_ipc_acpi_ids[] = {
 	{ "INT34D4", 0 },
 	{ }
 };
+MODULE_DEVICE_TABLE(acpi, punit_ipc_acpi_ids);
 
 static struct platform_driver intel_punit_ipc_driver = {
 	.probe = intel_punit_ipc_probe,
@@ -3081,11 +3081,11 @@ static int blogic_qcmd_lck(struct scsi_cmnd *command,
 		ccb->opcode = BLOGIC_INITIATOR_CCB_SG;
 		ccb->datalen = count * sizeof(struct blogic_sg_seg);
 		if (blogic_multimaster_type(adapter))
-			ccb->data = (void *)((unsigned int) ccb->dma_handle +
+			ccb->data = (unsigned int) ccb->dma_handle +
 					((unsigned long) &ccb->sglist -
-					(unsigned long) ccb));
+					(unsigned long) ccb);
 		else
-			ccb->data = ccb->sglist;
+			ccb->data = virt_to_32bit_virt(ccb->sglist);
 
 		scsi_for_each_sg(command, sg, count, i) {
 			ccb->sglist[i].segbytes = sg_dma_len(sg);
@@ -821,7 +821,7 @@ struct blogic_ccb {
 	unsigned char cdblen;				/* Byte 2 */
 	unsigned char sense_datalen;			/* Byte 3 */
 	u32 datalen;					/* Bytes 4-7 */
-	void *data;					/* Bytes 8-11 */
+	u32 data;					/* Bytes 8-11 */
 	unsigned char:8;				/* Byte 12 */
 	unsigned char:8;				/* Byte 13 */
 	enum blogic_adapter_status adapter_status;	/* Byte 14 */
@@ -41,7 +41,7 @@ static bool phy_is_wideport_member(struct asd_sas_port *port, struct asd_sas_phy
 
 static void sas_resume_port(struct asd_sas_phy *phy)
 {
-	struct domain_device *dev;
+	struct domain_device *dev, *n;
 	struct asd_sas_port *port = phy->port;
 	struct sas_ha_struct *sas_ha = phy->ha;
 	struct sas_internal *si = to_sas_internal(sas_ha->core.shost->transportt);
@@ -60,7 +60,7 @@ static void sas_resume_port(struct asd_sas_phy *phy)
 	 * 1/ presume every device came back
 	 * 2/ force the next revalidation to check all expander phys
 	 */
-	list_for_each_entry(dev, &port->dev_list, dev_list_node) {
+	list_for_each_entry_safe(dev, n, &port->dev_list, dev_list_node) {
 		int i, rc;
 
 		rc = sas_notify_lldd_dev_found(dev);
@@ -382,7 +382,7 @@ static int spi_gpio_probe(struct platform_device *pdev)
 		return -ENODEV;
 #endif
 
-	master = spi_alloc_master(&pdev->dev, sizeof(*spi_gpio));
+	master = devm_spi_alloc_master(&pdev->dev, sizeof(*spi_gpio));
 	if (!master)
 		return -ENOMEM;
 
@@ -438,11 +438,7 @@ static int spi_gpio_probe(struct platform_device *pdev)
 	}
 	spi_gpio->bitbang.setup_transfer = spi_bitbang_setup_transfer;
 
-	status = spi_bitbang_start(&spi_gpio->bitbang);
-	if (status)
-		spi_master_put(master);
-
-	return status;
+	return spi_bitbang_start(&spi_gpio->bitbang);
 }
 
 static int spi_gpio_remove(struct platform_device *pdev)
@@ -2148,7 +2148,7 @@ static int _nbu2ss_nuke(struct nbu2ss_udc *udc,
 			struct nbu2ss_ep *ep,
 			int status)
 {
-	struct nbu2ss_req *req;
+	struct nbu2ss_req *req, *n;
 
 	/* Endpoint Disable */
 	_nbu2ss_epn_exit(udc, ep);
@@ -2160,7 +2160,7 @@ static int _nbu2ss_nuke(struct nbu2ss_udc *udc,
 		return 0;
 
 	/* called with irqs blocked */
-	list_for_each_entry(req, &ep->queue, queue) {
+	list_for_each_entry_safe(req, n, &ep->queue, queue) {
 		_nbu2ss_ep_done(ep, req, status);
 	}
 
@@ -703,7 +703,6 @@ static int ad7746_probe(struct i2c_client *client,
 		indio_dev->num_channels = ARRAY_SIZE(ad7746_channels);
 	else
 		indio_dev->num_channels = ARRAY_SIZE(ad7746_channels) - 2;
-	indio_dev->num_channels = ARRAY_SIZE(ad7746_channels);
 	indio_dev->modes = INDIO_DIRECT_MODE;
 
 	if (pdata) {
@@ -452,7 +452,7 @@ static int mt7621_spi_probe(struct platform_device *pdev)
 	if (status)
 		return status;
 
-	master = spi_alloc_master(&pdev->dev, sizeof(*rs));
+	master = devm_spi_alloc_master(&pdev->dev, sizeof(*rs));
 	if (master == NULL) {
 		dev_info(&pdev->dev, "master allocation failed\n");
 		clk_disable_unprepare(clk);
@@ -487,7 +487,11 @@ static int mt7621_spi_probe(struct platform_device *pdev)
 
 	mt7621_spi_reset(rs, 0);
 
-	return spi_register_master(master);
+	ret = spi_register_master(master);
+	if (ret)
+		clk_disable_unprepare(clk);
+
+	return ret;
 }
 
 static int mt7621_spi_remove(struct platform_device *pdev)
@@ -498,8 +502,8 @@ static int mt7621_spi_remove(struct platform_device *pdev)
 	master = dev_get_drvdata(&pdev->dev);
 	rs = spi_master_get_devdata(master);
 
-	clk_disable(rs->clk);
 	spi_unregister_master(master);
+	clk_disable_unprepare(rs->clk);
 
 	return 0;
 }
@@ -367,15 +367,15 @@ int dma_port_flash_read(struct tb_dma_port *dma, unsigned int address,
 			void *buf, size_t size)
 {
 	unsigned int retries = DMA_PORT_RETRIES;
-	unsigned int offset;
-
-	offset = address & 3;
-	address = address & ~3;
 
 	do {
-		u32 nbytes = min_t(u32, size, MAIL_DATA_DWORDS * 4);
+		unsigned int offset;
+		size_t nbytes;
 		int ret;
 
+		offset = address & 3;
+		nbytes = min_t(size_t, size + offset, MAIL_DATA_DWORDS * 4);
+
 		ret = dma_port_flash_read_block(dma, address, dma->buf,
 						ALIGN(nbytes, 4));
 		if (ret) {
@@ -387,6 +387,7 @@ int dma_port_flash_read(struct tb_dma_port *dma, unsigned int address,
 			return ret;
 		}
 
+		nbytes -= offset;
 		memcpy(buf, dma->buf + offset, nbytes);
 
 		size -= nbytes;
@@ -1480,10 +1480,12 @@ static int __init max310x_uart_init(void)
 		return ret;
 
 #ifdef CONFIG_SPI_MASTER
-	spi_register_driver(&max310x_spi_driver);
+	ret = spi_register_driver(&max310x_spi_driver);
+	if (ret)
+		uart_unregister_driver(&max310x_uart);
 #endif
 
-	return 0;
+	return ret;
 }
 module_init(max310x_uart_init);
 
@@ -195,7 +195,6 @@ struct rp2_card {
	void __iomem			*bar0;
	void __iomem			*bar1;
	spinlock_t			card_lock;
-	struct completion		fw_loaded;
 };
 
 #define RP_ID(prod) PCI_VDEVICE(RP, (prod))
@@ -664,17 +663,10 @@ static void rp2_remove_ports(struct rp2_card *card)
 	card->initialized_ports = 0;
 }
 
-static void rp2_fw_cb(const struct firmware *fw, void *context)
+static int rp2_load_firmware(struct rp2_card *card, const struct firmware *fw)
 {
-	struct rp2_card *card = context;
 	resource_size_t phys_base;
-	int i, rc = -ENOENT;
-
-	if (!fw) {
-		dev_err(&card->pdev->dev, "cannot find '%s' firmware image\n",
-			RP2_FW_NAME);
-		goto no_fw;
-	}
+	int i, rc = 0;
 
 	phys_base = pci_resource_start(card->pdev, 1);
 
@@ -720,23 +712,13 @@ static int rp2_load_firmware(struct rp2_card *card, const struct firmware *fw)
 		card->initialized_ports++;
 	}
 
-	release_firmware(fw);
-no_fw:
-	/*
-	 * rp2_fw_cb() is called from a workqueue long after rp2_probe()
-	 * has already returned success. So if something failed here,
-	 * we'll just leave the now-dormant device in place until somebody
-	 * unbinds it.
-	 */
-	if (rc)
-		dev_warn(&card->pdev->dev, "driver initialization failed\n");
-
-	complete(&card->fw_loaded);
+	return rc;
 }
 
 static int rp2_probe(struct pci_dev *pdev,
 		     const struct pci_device_id *id)
 {
+	const struct firmware *fw;
 	struct rp2_card *card;
 	struct rp2_uart_port *ports;
 	void __iomem * const *bars;
@@ -747,7 +729,6 @@ static int rp2_probe(struct pci_dev *pdev,
 		return -ENOMEM;
 	pci_set_drvdata(pdev, card);
 	spin_lock_init(&card->card_lock);
-	init_completion(&card->fw_loaded);
 
 	rc = pcim_enable_device(pdev);
 	if (rc)
@@ -780,22 +761,24 @@ static int rp2_probe(struct pci_dev *pdev,
 		return -ENOMEM;
 	card->ports = ports;
 
+	rc = request_firmware(&fw, RP2_FW_NAME, &pdev->dev);
+	if (rc < 0) {
+		dev_err(&pdev->dev, "cannot find '%s' firmware image\n",
+			RP2_FW_NAME);
+		return rc;
+	}
+
+	rc = rp2_load_firmware(card, fw);
+
+	release_firmware(fw);
+	if (rc < 0)
+		return rc;
+
 	rc = devm_request_irq(&pdev->dev, pdev->irq, rp2_uart_interrupt,
 			      IRQF_SHARED, DRV_NAME, card);
 	if (rc)
 		return rc;
 
-	/*
-	 * Only catastrophic errors (e.g. ENOMEM) are reported here.
-	 * If the FW image is missing, we'll find out in rp2_fw_cb()
-	 * and print an error message.
-	 */
-	rc = request_firmware_nowait(THIS_MODULE, 1, RP2_FW_NAME, &pdev->dev,
-				     GFP_KERNEL, card, rp2_fw_cb);
-	if (rc)
-		return rc;
-	dev_dbg(&pdev->dev, "waiting for firmware blob...\n");
-
 	return 0;
 }
 
@@ -803,7 +786,6 @@ static void rp2_remove(struct pci_dev *pdev)
 {
 	struct rp2_card *card = pci_get_drvdata(pdev);
 
-	wait_for_completion(&card->fw_loaded);
 	rp2_remove_ports(card);
 }
 
@@ -1026,10 +1026,10 @@ static int scif_set_rtrg(struct uart_port *port, int rx_trig)
 {
 	unsigned int bits;
 
+	if (rx_trig >= port->fifosize)
+		rx_trig = port->fifosize - 1;
 	if (rx_trig < 1)
 		rx_trig = 1;
-	if (rx_trig >= port->fifosize)
-		rx_trig = port->fifosize;
 
 	/* HSCIF can be set to an arbitrary level. */
 	if (sci_getreg(port, HSRTRGR)->size) {
@@ -1189,7 +1189,12 @@ static int proc_bulk(struct usb_dev_state *ps, void __user *arg)
 	ret = usbfs_increase_memory_usage(len1 + sizeof(struct urb));
 	if (ret)
 		return ret;
-	tbuf = kmalloc(len1, GFP_KERNEL);
+
+	/*
+	 * len1 can be almost arbitrarily large. Don't WARN if it's
+	 * too big, just fail the request.
+	 */
+	tbuf = kmalloc(len1, GFP_KERNEL | __GFP_NOWARN);
 	if (!tbuf) {
 		ret = -ENOMEM;
 		goto done;
@@ -1631,7 +1636,7 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
 	if (num_sgs) {
 		as->urb->sg = kmalloc_array(num_sgs,
 					    sizeof(struct scatterlist),
-					    GFP_KERNEL);
+					    GFP_KERNEL | __GFP_NOWARN);
 		if (!as->urb->sg) {
 			ret = -ENOMEM;
 			goto error;
@@ -1666,7 +1671,7 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
 				(uurb_start - as->usbm->vm_start);
 		} else {
 			as->urb->transfer_buffer = kmalloc(uurb->buffer_length,
-					GFP_KERNEL);
+					GFP_KERNEL | __GFP_NOWARN);
 			if (!as->urb->transfer_buffer) {
 				ret = -ENOMEM;
 				goto error;
@@ -146,8 +146,10 @@ static inline unsigned hub_power_on_good_delay(struct usb_hub *hub)
 {
 	unsigned delay = hub->descriptor->bPwrOn2PwrGood * 2;
 
-	/* Wait at least 100 msec for power to become stable */
-	return max(delay, 100U);
+	if (!hub->hdev->parent)	/* root hub */
+		return delay;
+	else /* Wait at least 100 msec for power to become stable */
+		return max(delay, 100U);
 }
 
 static inline int hub_port_debounce_be_connected(struct usb_hub *hub,
@@ -1162,6 +1162,7 @@ static void dwc3_prepare_one_trb_sg(struct dwc3_ep *dep,
 			req->start_sg = sg_next(s);
 
 		req->num_queued_sgs++;
+		req->num_pending_sgs--;
 
 		/*
 		 * The number of pending SG entries may not correspond to the
@@ -1169,7 +1170,7 @@ static void dwc3_prepare_one_trb_sg(struct dwc3_ep *dep,
 		 * don't include unused SG entries.
 		 */
 		if (length == 0) {
-			req->num_pending_sgs -= req->request.num_mapped_sgs - req->num_queued_sgs;
+			req->num_pending_sgs = 0;
 			break;
 		}
 
@@ -1834,6 +1835,10 @@ static void dwc3_gadget_enable_irq(struct dwc3 *dwc)
 	if (dwc->revision < DWC3_REVISION_250A)
 		reg |= DWC3_DEVTEN_ULSTCNGEN;
 
+	/* On 2.30a and above this bit enables U3/L2-L1 Suspend Events */
+	if (dwc->revision >= DWC3_REVISION_230A)
+		reg |= DWC3_DEVTEN_EOPFEN;
+
 	dwc3_writel(dwc->regs, DWC3_DEVTEN, reg);
 }
 
@@ -2357,15 +2362,15 @@ static int dwc3_gadget_ep_reclaim_trb_sg(struct dwc3_ep *dep,
 	struct dwc3_trb *trb = &dep->trb_pool[dep->trb_dequeue];
 	struct scatterlist *sg = req->sg;
 	struct scatterlist *s;
-	unsigned int pending = req->num_pending_sgs;
+	unsigned int num_queued = req->num_queued_sgs;
 	unsigned int i;
 	int ret = 0;
 
-	for_each_sg(sg, s, pending, i) {
+	for_each_sg(sg, s, num_queued, i) {
 		trb = &dep->trb_pool[dep->trb_dequeue];
 
 		req->sg = sg_next(s);
-		req->num_pending_sgs--;
+		req->num_queued_sgs--;
 
 		ret = dwc3_gadget_ep_reclaim_completed_trb(dep, req,
 				trb, event, status, true);
@@ -2388,7 +2393,7 @@ static int dwc3_gadget_ep_reclaim_trb_linear(struct dwc3_ep *dep,
 
 static bool dwc3_gadget_ep_request_completed(struct dwc3_request *req)
 {
-	return req->num_pending_sgs == 0;
+	return req->num_pending_sgs == 0 && req->num_queued_sgs == 0;
 }
 
 static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep,
@@ -2397,7 +2402,7 @@ static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep,
 {
 	int ret;
 
-	if (req->num_pending_sgs)
+	if (req->request.num_mapped_sgs)
 		ret = dwc3_gadget_ep_reclaim_trb_sg(dep, req, event,
 				status);
 	else
@@ -1466,7 +1466,7 @@ static void usb3_start_pipen(struct renesas_usb3_ep *usb3_ep,
 			     struct renesas_usb3_request *usb3_req)
 {
 	struct renesas_usb3 *usb3 = usb3_ep_to_usb3(usb3_ep);
-	struct renesas_usb3_request *usb3_req_first = usb3_get_request(usb3_ep);
+	struct renesas_usb3_request *usb3_req_first;
 	unsigned long flags;
 	int ret = -EAGAIN;
 	u32 enable_bits = 0;
@@ -1474,7 +1474,8 @@ static void usb3_start_pipen(struct renesas_usb3_ep *usb3_ep,
 	spin_lock_irqsave(&usb3->lock, flags);
 	if (usb3_ep->halt || usb3_ep->started)
 		goto out;
-	if (usb3_req != usb3_req_first)
+	usb3_req_first = __usb3_get_request(usb3_ep);
+	if (!usb3_req_first || usb3_req != usb3_req_first)
 		goto out;
 
 	if (usb3_pn_change(usb3, usb3_ep->num) < 0)
@@ -59,9 +59,9 @@ static ssize_t speed_store(struct device *dev, struct device_attribute *attr,
 	/* Set speed */
 	retval = usb_control_msg(tv->udev, usb_sndctrlpipe(tv->udev, 0),
 				 0x01, /* vendor request: set speed */
-				 USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_OTHER,
+				 USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_OTHER,
 				 tv->speed, /* speed value */
-				 0, NULL, 0, USB_CTRL_GET_TIMEOUT);
+				 0, NULL, 0, USB_CTRL_SET_TIMEOUT);
 	if (retval) {
 		tv->speed = old;
 		dev_dbg(&tv->udev->dev, "retval = %d\n", retval);
@@ -736,6 +736,7 @@ static int uss720_probe(struct usb_interface *intf,
 	parport_announce_port(pp);
 
 	usb_set_intfdata(intf, pp);
+	usb_put_dev(usbdev);
 	return 0;
 
 probe_abort:
@@ -1024,6 +1024,9 @@ static const struct usb_device_id id_table_combined[] = {
 	/* Sienna devices */
 	{ USB_DEVICE(FTDI_VID, FTDI_SIENNA_PID) },
 	{ USB_DEVICE(ECHELON_VID, ECHELON_U20_PID) },
+	/* IDS GmbH devices */
+	{ USB_DEVICE(IDS_VID, IDS_SI31A_PID) },
+	{ USB_DEVICE(IDS_VID, IDS_CM31A_PID) },
 	/* U-Blox devices */
 	{ USB_DEVICE(UBLOX_VID, UBLOX_C099F9P_ZED_PID) },
 	{ USB_DEVICE(UBLOX_VID, UBLOX_C099F9P_ODIN_PID) },
@@ -1567,6 +1567,13 @@
 #define UNJO_VID 0x22B7
 #define UNJO_ISODEBUG_V1_PID 0x150D

+/*
+ * IDS GmbH
+ */
+#define IDS_VID 0x2CAF
+#define IDS_SI31A_PID 0x13A2
+#define IDS_CM31A_PID 0x13A3
+
 /*
  * U-Blox products (http://www.u-blox.com).
  */
@@ -1240,6 +1240,10 @@ static const struct usb_device_id option_ids[] = {
 	  .driver_info = NCTRL(0) | RSVD(1) },
 	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1901, 0xff), /* Telit LN940 (MBIM) */
 	  .driver_info = NCTRL(0) },
+	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x7010, 0xff), /* Telit LE910-S1 (RNDIS) */
+	  .driver_info = NCTRL(2) },
+	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x7011, 0xff), /* Telit LE910-S1 (ECM) */
+	  .driver_info = NCTRL(2) },
 	{ USB_DEVICE(TELIT_VENDOR_ID, 0x9010), /* Telit SBL FN980 flashing device */
 	  .driver_info = NCTRL(0) | ZLP },
 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MF622, 0xff, 0xff, 0xff) }, /* ZTE WCDMA products */
@@ -107,6 +107,7 @@ static const struct usb_device_id id_table[] = {
 	{ USB_DEVICE(SONY_VENDOR_ID, SONY_QN3USB_PRODUCT_ID) },
 	{ USB_DEVICE(SANWA_VENDOR_ID, SANWA_PRODUCT_ID) },
 	{ USB_DEVICE(ADLINK_VENDOR_ID, ADLINK_ND6530_PRODUCT_ID) },
+	{ USB_DEVICE(ADLINK_VENDOR_ID, ADLINK_ND6530GC_PRODUCT_ID) },
 	{ USB_DEVICE(SMART_VENDOR_ID, SMART_PRODUCT_ID) },
 	{ USB_DEVICE(AT_VENDOR_ID, AT_VTKIT3_PRODUCT_ID) },
 	{ } /* Terminating entry */
@@ -152,6 +152,7 @@
 /* ADLINK ND-6530 RS232,RS485 and RS422 adapter */
 #define ADLINK_VENDOR_ID 0x0b63
 #define ADLINK_ND6530_PRODUCT_ID 0x6530
+#define ADLINK_ND6530GC_PRODUCT_ID 0x653a

 /* SMART USB Serial Adapter */
 #define SMART_VENDOR_ID 0x0b8c
@@ -37,6 +37,7 @@
 /* Vendor and product ids */
 #define TI_VENDOR_ID 0x0451
 #define IBM_VENDOR_ID 0x04b3
+#define STARTECH_VENDOR_ID 0x14b0
 #define TI_3410_PRODUCT_ID 0x3410
 #define IBM_4543_PRODUCT_ID 0x4543
 #define IBM_454B_PRODUCT_ID 0x454b

@@ -374,6 +375,7 @@ static const struct usb_device_id ti_id_table_3410[] = {
 	{ USB_DEVICE(MXU1_VENDOR_ID, MXU1_1131_PRODUCT_ID) },
 	{ USB_DEVICE(MXU1_VENDOR_ID, MXU1_1150_PRODUCT_ID) },
 	{ USB_DEVICE(MXU1_VENDOR_ID, MXU1_1151_PRODUCT_ID) },
+	{ USB_DEVICE(STARTECH_VENDOR_ID, TI_3410_PRODUCT_ID) },
 	{ } /* terminator */
 };

@@ -412,6 +414,7 @@ static const struct usb_device_id ti_id_table_combined[] = {
 	{ USB_DEVICE(MXU1_VENDOR_ID, MXU1_1131_PRODUCT_ID) },
 	{ USB_DEVICE(MXU1_VENDOR_ID, MXU1_1150_PRODUCT_ID) },
 	{ USB_DEVICE(MXU1_VENDOR_ID, MXU1_1151_PRODUCT_ID) },
+	{ USB_DEVICE(STARTECH_VENDOR_ID, TI_3410_PRODUCT_ID) },
 	{ } /* terminator */
 };
@@ -1770,8 +1770,6 @@ static noinline int link_to_fixup_dir(struct btrfs_trans_handle *trans,
 		ret = btrfs_update_inode(trans, root, inode);
 	} else if (ret == -EEXIST) {
 		ret = 0;
-	} else {
-		BUG(); /* Logic Error */
 	}
 	iput(inode);
@@ -791,6 +791,13 @@ SMB2_negotiate(const unsigned int xid, struct cifs_ses *ses)
 	/* Internal types */
 	server->capabilities |= SMB2_NT_FIND | SMB2_LARGE_FILES;

+	/*
+	 * SMB3.0 supports only 1 cipher and doesn't have a encryption neg context
+	 * Set the cipher type manually.
+	 */
+	if (server->dialect == SMB30_PROT_ID && (server->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION))
+		server->cipher_type = SMB2_ENCRYPTION_AES128_CCM;
+
 	security_blob = smb2_get_data_area_len(&blob_offset, &blob_length,
 					       (struct smb2_sync_hdr *)rsp);
 	/*

@@ -3117,10 +3124,10 @@ smb2_new_read_req(void **buf, unsigned int *total_len,
 		 * Related requests use info from previous read request
 		 * in chain.
 		 */
-		shdr->SessionId = 0xFFFFFFFF;
+		shdr->SessionId = 0xFFFFFFFFFFFFFFFF;
 		shdr->TreeId = 0xFFFFFFFF;
-		req->PersistentFileId = 0xFFFFFFFF;
+		req->PersistentFileId = 0xFFFFFFFFFFFFFFFF;
-		req->VolatileFileId = 0xFFFFFFFF;
+		req->VolatileFileId = 0xFFFFFFFFFFFFFFFF;
 		}
 	}
 	if (remaining_bytes > io_parms->length)
@@ -426,7 +426,7 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
 		u32 hash;

 		index = page->index;
-		hash = hugetlb_fault_mutex_hash(h, mapping, index, 0);
+		hash = hugetlb_fault_mutex_hash(h, mapping, index);
 		mutex_lock(&hugetlb_fault_mutex_table[hash]);

 		/*

@@ -623,7 +623,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
 		addr = index * hpage_size;

 		/* mutex taken here, fault path and hole punch */
-		hash = hugetlb_fault_mutex_hash(h, mapping, index, addr);
+		hash = hugetlb_fault_mutex_hash(h, mapping, index);
 		mutex_lock(&hugetlb_fault_mutex_table[hash]);

 		/* See if already present in mapping to avoid alloc/free */
@@ -717,7 +717,7 @@ filelayout_decode_layout(struct pnfs_layout_hdr *flo,
 		if (unlikely(!p))
 			goto out_err;
 		fl->fh_array[i]->size = be32_to_cpup(p++);
-		if (sizeof(struct nfs_fh) < fl->fh_array[i]->size) {
+		if (fl->fh_array[i]->size > NFS_MAXFHSIZE) {
 			printk(KERN_ERR "NFS: Too big fh %d received %d\n",
 			       i, fl->fh_array[i]->size);
 			goto out_err;
@@ -148,7 +148,7 @@ static loff_t nfs4_file_llseek(struct file *filep, loff_t offset, int whence)
 	case SEEK_HOLE:
 	case SEEK_DATA:
 		ret = nfs42_proc_llseek(filep, offset, whence);
-		if (ret != -ENOTSUPP)
+		if (ret != -EOPNOTSUPP)
 			return ret;
 		/* Fall through */
 	default:
@@ -987,17 +987,16 @@ static void nfs_pageio_doio(struct nfs_pageio_descriptor *desc)
 {
 	struct nfs_pgio_mirror *mirror = nfs_pgio_current_mirror(desc);


 	if (!list_empty(&mirror->pg_list)) {
 		int error = desc->pg_ops->pg_doio(desc);
 		if (error < 0)
 			desc->pg_error = error;
-		else
+		if (list_empty(&mirror->pg_list)) {
 			mirror->pg_bytes_written += mirror->pg_count;
-	}
-	if (list_empty(&mirror->pg_list)) {
-		mirror->pg_count = 0;
-		mirror->pg_base = 0;
+			mirror->pg_count = 0;
+			mirror->pg_base = 0;
+			mirror->pg_recoalesce = 0;
+		}
 	}
 }

@@ -1095,7 +1094,6 @@ static int nfs_do_recoalesce(struct nfs_pageio_descriptor *desc)

 	do {
 		list_splice_init(&mirror->pg_list, &head);
-		mirror->pg_bytes_written -= mirror->pg_count;
 		mirror->pg_count = 0;
 		mirror->pg_base = 0;
 		mirror->pg_recoalesce = 0;
@@ -1268,6 +1268,11 @@ _pnfs_return_layout(struct inode *ino)
 {
 	struct pnfs_layout_hdr *lo = NULL;
 	struct nfs_inode *nfsi = NFS_I(ino);
+	struct pnfs_layout_range range = {
+		.iomode = IOMODE_ANY,
+		.offset = 0,
+		.length = NFS4_MAX_UINT64,
+	};
 	LIST_HEAD(tmp_list);
 	nfs4_stateid stateid;
 	int status = 0;

@@ -1294,16 +1299,10 @@ _pnfs_return_layout(struct inode *ino)
 	}
 	valid_layout = pnfs_layout_is_valid(lo);
 	pnfs_clear_layoutcommit(ino, &tmp_list);
-	pnfs_mark_matching_lsegs_return(lo, &tmp_list, NULL, 0);
+	pnfs_mark_matching_lsegs_return(lo, &tmp_list, &range, 0);

-	if (NFS_SERVER(ino)->pnfs_curr_ld->return_range) {
-		struct pnfs_layout_range range = {
-			.iomode = IOMODE_ANY,
-			.offset = 0,
-			.length = NFS4_MAX_UINT64,
-		};
+	if (NFS_SERVER(ino)->pnfs_curr_ld->return_range)
 		NFS_SERVER(ino)->pnfs_curr_ld->return_range(lo, &range);
-	}

 	/* Don't send a LAYOUTRETURN if list was initially empty */
 	if (!test_bit(NFS_LAYOUT_RETURN_REQUESTED, &lo->plh_flags) ||
@@ -2565,6 +2565,10 @@ static ssize_t proc_pid_attr_write(struct file * file, const char __user * buf,
 	void *page;
 	int rv;

+	/* A task may only write when it was the opener. */
+	if (file->f_cred != current_real_cred())
+		return -EPERM;
+
 	rcu_read_lock();
 	task = pid_task(proc_pid(inode), PIDTYPE_PID);
 	if (!task) {
@@ -144,10 +144,11 @@ struct bpf_verifier_state_list {
 };

 /* Possible states for alu_state member. */
-#define BPF_ALU_SANITIZE_SRC 1U
-#define BPF_ALU_SANITIZE_DST 2U
+#define BPF_ALU_SANITIZE_SRC (1U << 0)
+#define BPF_ALU_SANITIZE_DST (1U << 1)
 #define BPF_ALU_NEG_VALUE (1U << 2)
 #define BPF_ALU_NON_POINTER (1U << 3)
+#define BPF_ALU_IMMEDIATE (1U << 4)
 #define BPF_ALU_SANITIZE (BPF_ALU_SANITIZE_SRC | \
 			  BPF_ALU_SANITIZE_DST)
@@ -124,7 +124,7 @@ void free_huge_page(struct page *page);
 void hugetlb_fix_reserve_counts(struct inode *inode);
 extern struct mutex *hugetlb_fault_mutex_table;
 u32 hugetlb_fault_mutex_hash(struct hstate *h, struct address_space *mapping,
-			     pgoff_t idx, unsigned long address);
+			     pgoff_t idx);

 pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud);
@@ -4788,7 +4788,7 @@ unsigned int ieee80211_get_mesh_hdrlen(struct ieee80211s_hdr *meshhdr);
  */
 int ieee80211_data_to_8023_exthdr(struct sk_buff *skb, struct ethhdr *ehdr,
 				  const u8 *addr, enum nl80211_iftype iftype,
-				  u8 data_offset);
+				  u8 data_offset, bool is_amsdu);

 /**
  * ieee80211_data_to_8023 - convert an 802.11 data frame to 802.3

@@ -4800,7 +4800,7 @@ int ieee80211_data_to_8023_exthdr(struct sk_buff *skb, struct ethhdr *ehdr,
 static inline int ieee80211_data_to_8023(struct sk_buff *skb, const u8 *addr,
 					 enum nl80211_iftype iftype)
 {
-	return ieee80211_data_to_8023_exthdr(skb, NULL, addr, iftype, 0);
+	return ieee80211_data_to_8023_exthdr(skb, NULL, addr, iftype, 0, false);
 }

 /**
@@ -310,6 +310,7 @@ int nci_nfcc_loopback(struct nci_dev *ndev, void *data, size_t data_len,
 		      struct sk_buff **resp);

 struct nci_hci_dev *nci_hci_allocate(struct nci_dev *ndev);
+void nci_hci_deallocate(struct nci_dev *ndev);
 int nci_hci_send_event(struct nci_dev *ndev, u8 gate, u8 event,
 		       const u8 *param, size_t param_len);
 int nci_hci_send_cmd(struct nci_dev *ndev, u8 gate,
@@ -2729,37 +2729,43 @@ static struct bpf_insn_aux_data *cur_aux(struct bpf_verifier_env *env)
 	return &env->insn_aux_data[env->insn_idx];
 }

+enum {
+	REASON_BOUNDS = -1,
+	REASON_TYPE   = -2,
+	REASON_PATHS  = -3,
+	REASON_LIMIT  = -4,
+	REASON_STACK  = -5,
+};
+
 static int retrieve_ptr_limit(const struct bpf_reg_state *ptr_reg,
-			      u32 *ptr_limit, u8 opcode, bool off_is_neg)
+			      u32 *alu_limit, bool mask_to_left)
 {
-	bool mask_to_left = (opcode == BPF_ADD && off_is_neg) ||
-			    (opcode == BPF_SUB && !off_is_neg);
-	u32 off, max;
+	u32 max = 0, ptr_limit = 0;

 	switch (ptr_reg->type) {
 	case PTR_TO_STACK:
 		/* Offset 0 is out-of-bounds, but acceptable start for the
-		 * left direction, see BPF_REG_FP.
+		 * left direction, see BPF_REG_FP. Also, unknown scalar
+		 * offset where we would need to deal with min/max bounds is
+		 * currently prohibited for unprivileged.
 		 */
 		max = MAX_BPF_STACK + mask_to_left;
-		off = ptr_reg->off + ptr_reg->var_off.value;
-		if (mask_to_left)
-			*ptr_limit = MAX_BPF_STACK + off;
-		else
-			*ptr_limit = -off - 1;
-		return *ptr_limit >= max ? -ERANGE : 0;
+		ptr_limit = -(ptr_reg->var_off.value + ptr_reg->off);
+		break;
 	case PTR_TO_MAP_VALUE:
 		max = ptr_reg->map_ptr->value_size;
-		if (mask_to_left) {
-			*ptr_limit = ptr_reg->umax_value + ptr_reg->off;
-		} else {
-			off = ptr_reg->smin_value + ptr_reg->off;
-			*ptr_limit = ptr_reg->map_ptr->value_size - off - 1;
-		}
-		return *ptr_limit >= max ? -ERANGE : 0;
+		ptr_limit = (mask_to_left ?
+			     ptr_reg->smin_value :
+			     ptr_reg->umax_value) + ptr_reg->off;
+		break;
 	default:
-		return -EINVAL;
+		return REASON_TYPE;
 	}

+	if (ptr_limit >= max)
+		return REASON_LIMIT;
+	*alu_limit = ptr_limit;
+	return 0;
 }

 static bool can_skip_alu_sanitation(const struct bpf_verifier_env *env,
@@ -2777,7 +2783,7 @@ static int update_alu_sanitation_state(struct bpf_insn_aux_data *aux,
 	if (aux->alu_state &&
 	    (aux->alu_state != alu_state ||
 	     aux->alu_limit != alu_limit))
-		return -EACCES;
+		return REASON_PATHS;

 	/* Corresponding fixup done in fixup_bpf_calls(). */
 	aux->alu_state = alu_state;
@@ -2796,14 +2802,28 @@ static int sanitize_val_alu(struct bpf_verifier_env *env,
 	return update_alu_sanitation_state(aux, BPF_ALU_NON_POINTER, 0);
 }

+static bool sanitize_needed(u8 opcode)
+{
+	return opcode == BPF_ADD || opcode == BPF_SUB;
+}
+
+struct bpf_sanitize_info {
+	struct bpf_insn_aux_data aux;
+	bool mask_to_left;
+};
+
 static int sanitize_ptr_alu(struct bpf_verifier_env *env,
 			    struct bpf_insn *insn,
 			    const struct bpf_reg_state *ptr_reg,
+			    const struct bpf_reg_state *off_reg,
 			    struct bpf_reg_state *dst_reg,
-			    bool off_is_neg)
+			    struct bpf_sanitize_info *info,
+			    const bool commit_window)
 {
+	struct bpf_insn_aux_data *aux = commit_window ? cur_aux(env) : &info->aux;
 	struct bpf_verifier_state *vstate = env->cur_state;
-	struct bpf_insn_aux_data *aux = cur_aux(env);
+	bool off_is_imm = tnum_is_const(off_reg->var_off);
+	bool off_is_neg = off_reg->smin_value < 0;
 	bool ptr_is_dst_reg = ptr_reg == dst_reg;
 	u8 opcode = BPF_OP(insn->code);
 	u32 alu_state, alu_limit;
@@ -2821,18 +2841,47 @@ static int sanitize_ptr_alu(struct bpf_verifier_env *env,
 	if (vstate->speculative)
 		goto do_sim;

-	alu_state = off_is_neg ? BPF_ALU_NEG_VALUE : 0;
-	alu_state |= ptr_is_dst_reg ?
-		     BPF_ALU_SANITIZE_SRC : BPF_ALU_SANITIZE_DST;
+	if (!commit_window) {
+		if (!tnum_is_const(off_reg->var_off) &&
+		    (off_reg->smin_value < 0) != (off_reg->smax_value < 0))
+			return REASON_BOUNDS;
+
+		info->mask_to_left = (opcode == BPF_ADD && off_is_neg) ||
+				     (opcode == BPF_SUB && !off_is_neg);
+	}

-	err = retrieve_ptr_limit(ptr_reg, &alu_limit, opcode, off_is_neg);
+	err = retrieve_ptr_limit(ptr_reg, &alu_limit, info->mask_to_left);
 	if (err < 0)
 		return err;

+	if (commit_window) {
+		/* In commit phase we narrow the masking window based on
+		 * the observed pointer move after the simulated operation.
+		 */
+		alu_state = info->aux.alu_state;
+		alu_limit = abs(info->aux.alu_limit - alu_limit);
+	} else {
+		alu_state = off_is_neg ? BPF_ALU_NEG_VALUE : 0;
+		alu_state |= off_is_imm ? BPF_ALU_IMMEDIATE : 0;
+		alu_state |= ptr_is_dst_reg ?
+			     BPF_ALU_SANITIZE_SRC : BPF_ALU_SANITIZE_DST;
+	}
+
 	err = update_alu_sanitation_state(aux, alu_state, alu_limit);
 	if (err < 0)
 		return err;
 do_sim:
+	/* If we're in commit phase, we're done here given we already
+	 * pushed the truncated dst_reg into the speculative verification
+	 * stack.
+	 *
+	 * Also, when register is a known constant, we rewrite register-based
+	 * operation to immediate-based, and thus do not need masking (and as
+	 * a consequence, do not need to simulate the zero-truncation either).
+	 */
+	if (commit_window || off_is_imm)
+		return 0;
+
 	/* Simulate and find potential out-of-bounds access under
 	 * speculative execution from truncation as a result of
 	 * masking when off was not within expected range. If off
@@ -2849,7 +2898,81 @@ do_sim:
 	ret = push_stack(env, env->insn_idx + 1, env->insn_idx, true);
 	if (!ptr_is_dst_reg && ret)
 		*dst_reg = tmp;
-	return !ret ? -EFAULT : 0;
+	return !ret ? REASON_STACK : 0;
+}
+
+static int sanitize_err(struct bpf_verifier_env *env,
+			const struct bpf_insn *insn, int reason,
+			const struct bpf_reg_state *off_reg,
+			const struct bpf_reg_state *dst_reg)
+{
+	static const char *err = "pointer arithmetic with it prohibited for !root";
+	const char *op = BPF_OP(insn->code) == BPF_ADD ? "add" : "sub";
+	u32 dst = insn->dst_reg, src = insn->src_reg;
+
+	switch (reason) {
+	case REASON_BOUNDS:
+		verbose(env, "R%d has unknown scalar with mixed signed bounds, %s\n",
+			off_reg == dst_reg ? dst : src, err);
+		break;
+	case REASON_TYPE:
+		verbose(env, "R%d has pointer with unsupported alu operation, %s\n",
+			off_reg == dst_reg ? src : dst, err);
+		break;
+	case REASON_PATHS:
+		verbose(env, "R%d tried to %s from different maps, paths or scalars, %s\n",
+			dst, op, err);
+		break;
+	case REASON_LIMIT:
+		verbose(env, "R%d tried to %s beyond pointer bounds, %s\n",
+			dst, op, err);
+		break;
+	case REASON_STACK:
+		verbose(env, "R%d could not be pushed for speculative verification, %s\n",
+			dst, err);
+		break;
+	default:
+		verbose(env, "verifier internal error: unknown reason (%d)\n",
+			reason);
+		break;
+	}
+
+	return -EACCES;
+}
+
+static int sanitize_check_bounds(struct bpf_verifier_env *env,
+				 const struct bpf_insn *insn,
+				 const struct bpf_reg_state *dst_reg)
+{
+	u32 dst = insn->dst_reg;
+
+	/* For unprivileged we require that resulting offset must be in bounds
+	 * in order to be able to sanitize access later on.
+	 */
+	if (env->allow_ptr_leaks)
+		return 0;
+
+	switch (dst_reg->type) {
+	case PTR_TO_STACK:
+		if (check_stack_access(env, dst_reg, dst_reg->off +
+					dst_reg->var_off.value, 1)) {
+			verbose(env, "R%d stack pointer arithmetic goes out of range, "
+				"prohibited for !root\n", dst);
+			return -EACCES;
+		}
+		break;
+	case PTR_TO_MAP_VALUE:
+		if (check_map_access(env, dst, dst_reg->off, 1, false)) {
+			verbose(env, "R%d pointer arithmetic of map value goes out of range, "
+				"prohibited for !root\n", dst);
+			return -EACCES;
+		}
+		break;
+	default:
+		break;
+	}
+
+	return 0;
 }

 /* Handles arithmetic on a pointer and a scalar: computes new min/max and var_off.
@@ -2870,8 +2993,9 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
 	    smin_ptr = ptr_reg->smin_value, smax_ptr = ptr_reg->smax_value;
 	u64 umin_val = off_reg->umin_value, umax_val = off_reg->umax_value,
 	    umin_ptr = ptr_reg->umin_value, umax_ptr = ptr_reg->umax_value;
-	u32 dst = insn->dst_reg, src = insn->src_reg;
+	struct bpf_sanitize_info info = {};
 	u8 opcode = BPF_OP(insn->code);
+	u32 dst = insn->dst_reg;
 	int ret;

 	dst_reg = &regs[dst];
@@ -2908,12 +3032,6 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
 			dst);
 		return -EACCES;
 	}
-	if (ptr_reg->type == PTR_TO_MAP_VALUE &&
-	    !env->allow_ptr_leaks && !known && (smin_val < 0) != (smax_val < 0)) {
-		verbose(env, "R%d has unknown scalar with mixed signed bounds, pointer arithmetic with it prohibited for !root\n",
-			off_reg == dst_reg ? dst : src);
-		return -EACCES;
-	}

 	/* In case of 'scalar += pointer', dst_reg inherits pointer type and id.
 	 * The id may be overwritten later if we create a new variable offset.
@@ -2925,13 +3043,15 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
 	    !check_reg_sane_offset(env, ptr_reg, ptr_reg->type))
 		return -EINVAL;

+	if (sanitize_needed(opcode)) {
+		ret = sanitize_ptr_alu(env, insn, ptr_reg, off_reg, dst_reg,
+				       &info, false);
+		if (ret < 0)
+			return sanitize_err(env, insn, ret, off_reg, dst_reg);
+	}
+
 	switch (opcode) {
 	case BPF_ADD:
-		ret = sanitize_ptr_alu(env, insn, ptr_reg, dst_reg, smin_val < 0);
-		if (ret < 0) {
-			verbose(env, "R%d tried to add from different maps, paths, or prohibited types\n", dst);
-			return ret;
-		}
 		/* We can take a fixed offset as long as it doesn't overflow
 		 * the s32 'off' field
 		 */
@@ -2982,11 +3102,6 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
 		}
 		break;
 	case BPF_SUB:
-		ret = sanitize_ptr_alu(env, insn, ptr_reg, dst_reg, smin_val < 0);
-		if (ret < 0) {
-			verbose(env, "R%d tried to sub from different maps, paths, or prohibited types\n", dst);
-			return ret;
-		}
 		if (dst_reg == off_reg) {
 			/* scalar -= pointer. Creates an unknown scalar */
 			verbose(env, "R%d tried to subtract pointer from scalar\n",
@@ -3067,22 +3182,13 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
 	__reg_deduce_bounds(dst_reg);
 	__reg_bound_offset(dst_reg);

-	/* For unprivileged we require that resulting offset must be in bounds
-	 * in order to be able to sanitize access later on.
-	 */
-	if (!env->allow_ptr_leaks) {
-		if (dst_reg->type == PTR_TO_MAP_VALUE &&
-		    check_map_access(env, dst, dst_reg->off, 1, false)) {
-			verbose(env, "R%d pointer arithmetic of map value goes out of range, "
-				"prohibited for !root\n", dst);
-			return -EACCES;
-		} else if (dst_reg->type == PTR_TO_STACK &&
-			   check_stack_access(env, dst_reg, dst_reg->off +
-					      dst_reg->var_off.value, 1)) {
-			verbose(env, "R%d stack pointer arithmetic goes out of range, "
-				"prohibited for !root\n", dst);
-			return -EACCES;
-		}
+	if (sanitize_check_bounds(env, insn, dst_reg) < 0)
+		return -EACCES;
+	if (sanitize_needed(opcode)) {
+		ret = sanitize_ptr_alu(env, insn, dst_reg, off_reg, dst_reg,
+				       &info, true);
+		if (ret < 0)
+			return sanitize_err(env, insn, ret, off_reg, dst_reg);
 	}

 	return 0;
@@ -3103,7 +3209,6 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
 	s64 smin_val, smax_val;
 	u64 umin_val, umax_val;
 	u64 insn_bitness = (BPF_CLASS(insn->code) == BPF_ALU64) ? 64 : 32;
-	u32 dst = insn->dst_reg;
 	int ret;
 
 	if (insn_bitness == 32) {
@@ -3137,13 +3242,14 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
 		return 0;
 	}
 
+	if (sanitize_needed(opcode)) {
+		ret = sanitize_val_alu(env, insn);
+		if (ret < 0)
+			return sanitize_err(env, insn, ret, NULL, NULL);
+	}
+
 	switch (opcode) {
 	case BPF_ADD:
-		ret = sanitize_val_alu(env, insn);
-		if (ret < 0) {
-			verbose(env, "R%d tried to add from different pointers or scalars\n", dst);
-			return ret;
-		}
 		if (signed_add_overflows(dst_reg->smin_value, smin_val) ||
 		    signed_add_overflows(dst_reg->smax_value, smax_val)) {
 			dst_reg->smin_value = S64_MIN;
@@ -3163,11 +3269,6 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
 		dst_reg->var_off = tnum_add(dst_reg->var_off, src_reg.var_off);
 		break;
 	case BPF_SUB:
-		ret = sanitize_val_alu(env, insn);
-		if (ret < 0) {
-			verbose(env, "R%d tried to sub from different pointers or scalars\n", dst);
-			return ret;
-		}
 		if (signed_sub_overflows(dst_reg->smin_value, smax_val) ||
 		    signed_sub_overflows(dst_reg->smin_value, smin_val)) {
 			/* Overflow possible, we know nothing */
@@ -4127,8 +4228,9 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
 	struct bpf_verifier_state *this_branch = env->cur_state;
 	struct bpf_verifier_state *other_branch;
 	struct bpf_reg_state *regs = this_branch->frame[this_branch->curframe]->regs;
-	struct bpf_reg_state *dst_reg, *other_branch_regs;
+	struct bpf_reg_state *dst_reg, *other_branch_regs, *src_reg = NULL;
 	u8 opcode = BPF_OP(insn->code);
+	int pred = -1;
 	int err;
 
 	if (opcode > BPF_JSLE) {
@@ -4152,6 +4254,7 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
 				insn->src_reg);
 			return -EACCES;
 		}
+		src_reg = &regs[insn->src_reg];
 	} else {
 		if (insn->src_reg != BPF_REG_0) {
 			verbose(env, "BPF_JMP uses reserved fields\n");
@@ -4166,19 +4269,21 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
 
 	dst_reg = &regs[insn->dst_reg];
 
-	if (BPF_SRC(insn->code) == BPF_K) {
-		int pred = is_branch_taken(dst_reg, insn->imm, opcode);
-
-		if (pred == 1) {
-			/* only follow the goto, ignore fall-through */
-			*insn_idx += insn->off;
-			return 0;
-		} else if (pred == 0) {
-			/* only follow fall-through branch, since
-			 * that's where the program will go
-			 */
-			return 0;
-		}
+	if (BPF_SRC(insn->code) == BPF_K)
+		pred = is_branch_taken(dst_reg, insn->imm, opcode);
+	else if (src_reg->type == SCALAR_VALUE &&
+		 tnum_is_const(src_reg->var_off))
+		pred = is_branch_taken(dst_reg, src_reg->var_off.value,
+				       opcode);
+	if (pred == 1) {
+		/* only follow the goto, ignore fall-through */
+		*insn_idx += insn->off;
+		return 0;
+	} else if (pred == 0) {
+		/* only follow fall-through branch, since
+		 * that's where the program will go
+		 */
+		return 0;
 	}
 
 	other_branch = push_stack(env, *insn_idx + insn->off + 1, *insn_idx,
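The check_cond_jmp_op() change above lets the verifier prune dead branches not only when comparing against a BPF_K immediate, but also when the source register is a known-constant scalar (tnum_is_const()). A toy model of the three-way result for a BPF_JEQ comparison; `branch_taken_jeq` is a hypothetical name, not the kernel's is_branch_taken() API:

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of branch pruning for a BPF_JEQ comparison: returns
 * 1 (always taken), 0 (never taken), or -1 (unknown, so the verifier
 * must explore both paths).  "known" stands in for tnum_is_const(). */
static int branch_taken_jeq(int dst_known, uint64_t dst_val,
			    int src_known, uint64_t src_val)
{
	if (!dst_known || !src_known)
		return -1;		/* cannot prune either path */
	return dst_val == src_val;	/* 1: follow goto, 0: fall through */
}
```

A result of 1 or 0 lets the verifier skip pushing the dead branch onto its exploration stack entirely.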
@@ -6079,7 +6184,7 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
 			const u8 code_sub = BPF_ALU64 | BPF_SUB | BPF_X;
 			struct bpf_insn insn_buf[16];
 			struct bpf_insn *patch = &insn_buf[0];
-			bool issrc, isneg;
+			bool issrc, isneg, isimm;
 			u32 off_reg;
 
 			aux = &env->insn_aux_data[i + delta];
@@ -6090,16 +6195,21 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
 			isneg = aux->alu_state & BPF_ALU_NEG_VALUE;
 			issrc = (aux->alu_state & BPF_ALU_SANITIZE) ==
 				BPF_ALU_SANITIZE_SRC;
+			isimm = aux->alu_state & BPF_ALU_IMMEDIATE;
 
 			off_reg = issrc ? insn->src_reg : insn->dst_reg;
-			if (isneg)
-				*patch++ = BPF_ALU64_IMM(BPF_MUL, off_reg, -1);
-			*patch++ = BPF_MOV32_IMM(BPF_REG_AX, aux->alu_limit);
-			*patch++ = BPF_ALU64_REG(BPF_SUB, BPF_REG_AX, off_reg);
-			*patch++ = BPF_ALU64_REG(BPF_OR, BPF_REG_AX, off_reg);
-			*patch++ = BPF_ALU64_IMM(BPF_NEG, BPF_REG_AX, 0);
-			*patch++ = BPF_ALU64_IMM(BPF_ARSH, BPF_REG_AX, 63);
-			*patch++ = BPF_ALU64_REG(BPF_AND, BPF_REG_AX, off_reg);
+			if (isimm) {
+				*patch++ = BPF_MOV32_IMM(BPF_REG_AX, aux->alu_limit);
+			} else {
+				if (isneg)
+					*patch++ = BPF_ALU64_IMM(BPF_MUL, off_reg, -1);
+				*patch++ = BPF_MOV32_IMM(BPF_REG_AX, aux->alu_limit);
+				*patch++ = BPF_ALU64_REG(BPF_SUB, BPF_REG_AX, off_reg);
+				*patch++ = BPF_ALU64_REG(BPF_OR, BPF_REG_AX, off_reg);
+				*patch++ = BPF_ALU64_IMM(BPF_NEG, BPF_REG_AX, 0);
+				*patch++ = BPF_ALU64_IMM(BPF_ARSH, BPF_REG_AX, 63);
+				*patch++ = BPF_ALU64_REG(BPF_AND, BPF_REG_AX, off_reg);
+			}
 			if (!issrc)
 				*patch++ = BPF_MOV64_REG(insn->dst_reg, insn->src_reg);
 			insn->src_reg = BPF_REG_AX;
@@ -6107,7 +6217,7 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
 			insn->code = insn->code == code_add ?
 				     code_sub : code_add;
 			*patch++ = *insn;
-			if (issrc && isneg)
+			if (issrc && isneg && !isimm)
 				*patch++ = BPF_ALU64_IMM(BPF_MUL, off_reg, -1);
 			cnt = patch - insn_buf;
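For orientation, the non-immediate branch above emits the classic branchless alu_limit masking used against speculative out-of-bounds pointer arithmetic. A minimal user-space sketch of the same arithmetic, assuming an arithmetic right shift for signed values (true for the compilers the kernel supports); `sanitize_off` is a hypothetical name:

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the emitted BPF: AX = limit; AX -= off; AX |= off; AX = -AX;
 * AX >>= 63 (arithmetic); off &= AX.  The result is 'off' when
 * 0 <= off <= limit, and 0 otherwise -- with no branch to mispredict. */
static uint64_t sanitize_off(uint64_t limit, uint64_t off)
{
	int64_t ax = (int64_t)(limit - off);	/* BPF_SUB: negative iff off > limit */
	ax |= (int64_t)off;			/* BPF_OR: fold in off's own sign bit */
	ax = -ax;				/* BPF_NEG */
	ax >>= 63;				/* BPF_ARSH: all-ones or all-zeroes */
	return off & (uint64_t)ax;		/* BPF_AND: keep or clear off */
}
```

With `isimm` set, the patched code skips this sequence entirely: a constant offset needs no runtime clamping, only the `alu_limit` move.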
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3862,7 +3862,7 @@ retry:
 			 * handling userfault. Reacquire after handling
 			 * fault to make calling code simpler.
 			 */
-			hash = hugetlb_fault_mutex_hash(h, mapping, idx, haddr);
+			hash = hugetlb_fault_mutex_hash(h, mapping, idx);
 			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
 			ret = handle_userfault(&vmf, VM_UFFD_MISSING);
 			mutex_lock(&hugetlb_fault_mutex_table[hash]);
@@ -3971,7 +3971,7 @@ backout_unlocked:
 
 #ifdef CONFIG_SMP
 u32 hugetlb_fault_mutex_hash(struct hstate *h, struct address_space *mapping,
-			     pgoff_t idx, unsigned long address)
+			     pgoff_t idx)
 {
 	unsigned long key[2];
 	u32 hash;
@@ -3979,7 +3979,7 @@ u32 hugetlb_fault_mutex_hash(struct hstate *h, struct address_space *mapping,
 	key[0] = (unsigned long) mapping;
 	key[1] = idx;
 
-	hash = jhash2((u32 *)&key, sizeof(key)/sizeof(u32), 0);
+	hash = jhash2((u32 *)&key, sizeof(key)/(sizeof(u32)), 0);
 
 	return hash & (num_fault_mutexes - 1);
 }
@@ -3989,7 +3989,7 @@ u32 hugetlb_fault_mutex_hash(struct hstate *h, struct address_space *mapping,
 * return 0 and avoid the hashing overhead.
 */
 u32 hugetlb_fault_mutex_hash(struct hstate *h, struct address_space *mapping,
-			     pgoff_t idx, unsigned long address)
+			     pgoff_t idx)
 {
 	return 0;
 }
@@ -4033,7 +4033,7 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * get spurious allocation failures if two CPUs race to instantiate
 	 * the same page in the page cache.
 	 */
-	hash = hugetlb_fault_mutex_hash(h, mapping, idx, haddr);
+	hash = hugetlb_fault_mutex_hash(h, mapping, idx);
 	mutex_lock(&hugetlb_fault_mutex_table[hash]);
 
 	entry = huge_ptep_get(ptep);
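The hash above is simply jhash2() over the {mapping, idx} pair, reduced modulo the power-of-two table size with a mask. A self-contained sketch of the shape of that computation; the mixing loop is a stand-in, not the kernel's jhash2(), and `fault_mutex_index` is a hypothetical name:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hash the (mapping, index) pair and mask it down to a table slot.
 * num_fault_mutexes must be a power of two for '&' to act as modulo. */
static uint32_t fault_mutex_index(uint64_t mapping, uint64_t idx,
				  uint32_t num_fault_mutexes)
{
	uint64_t key[2] = { mapping, idx };
	const uint32_t *words = (const uint32_t *)key;
	uint32_t hash = 0;
	size_t i;

	/* Walk the key as 32-bit words, like jhash2() does (stand-in mix). */
	for (i = 0; i < sizeof(key) / sizeof(uint32_t); i++)
		hash = hash * 31 + words[i];

	return hash & (num_fault_mutexes - 1);	/* power-of-two mask */
}
```

Dropping the `address` parameter works because the same (mapping, idx) pair must always map to the same mutex for the lock/unlock pairs in the fault paths above to stay balanced.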
@@ -271,7 +271,7 @@ retry:
 		 */
 		idx = linear_page_index(dst_vma, dst_addr);
 		mapping = dst_vma->vm_file->f_mapping;
-		hash = hugetlb_fault_mutex_hash(h, mapping, idx, dst_addr);
+		hash = hugetlb_fault_mutex_hash(h, mapping, idx);
 		mutex_lock(&hugetlb_fault_mutex_table[hash]);
 
 		err = -ENOMEM;
@@ -1394,6 +1394,9 @@ static void pagetypeinfo_showfree_print(struct seq_file *m,
 			list_for_each(curr, &area->free_list[mtype])
 				freecount++;
 			seq_printf(m, "%6lu ", freecount);
+			spin_unlock_irq(&zone->lock);
+			cond_resched();
+			spin_lock_irq(&zone->lock);
 		}
 		seq_putc(m, '\n');
 	}
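The vmstat hunk above applies a standard lock-break pattern: a long walk under zone->lock is split per free-area, dropping the lock and calling cond_resched() between iterations so other lockers and the scheduler can make progress. A user-space analogue with a pthread mutex standing in for zone->lock and sched_yield() for cond_resched(); `count_with_breaks` is illustrative only:

```c
#include <assert.h>
#include <pthread.h>
#include <sched.h>

/* Sum per-bucket counts, breaking the lock between buckets instead of
 * holding it across the whole (potentially very long) walk. */
static long count_with_breaks(pthread_mutex_t *lock, const long *buckets,
			      int nbuckets)
{
	long total = 0;
	int i;

	pthread_mutex_lock(lock);
	for (i = 0; i < nbuckets; i++) {
		total += buckets[i];		/* stands in for the free_list walk */
		pthread_mutex_unlock(lock);	/* like spin_unlock_irq(&zone->lock) */
		sched_yield();			/* like cond_resched() */
		pthread_mutex_lock(lock);	/* reacquire before the next bucket */
	}
	pthread_mutex_unlock(lock);
	return total;
}
```

As in the kernel hunk, the per-bucket counts can become mutually inconsistent once the lock is dropped mid-walk; for a diagnostic file like /proc/pagetypeinfo that trade-off is acceptable.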
@@ -391,6 +391,11 @@ int cmtp_add_connection(struct cmtp_connadd_req *req, struct socket *sock)
 	if (!(session->flags & BIT(CMTP_LOOPBACK))) {
 		err = cmtp_attach_device(session);
 		if (err < 0) {
+			/* Caller will call fput in case of failure, and so
+			 * will cmtp_session kthread.
+			 */
+			get_file(session->sock->file);
+
 			atomic_inc(&session->terminate);
 			wake_up_interruptible(sk_sleep(session->sock->sk));
 			up_write(&cmtp_session_sem);
@@ -3020,6 +3020,7 @@ static inline int __bpf_skb_change_head(struct sk_buff *skb, u32 head_room,
 		__skb_push(skb, head_room);
 		memset(skb->data, 0, head_room);
 		skb_reset_mac_header(skb);
+		skb_reset_mac_len(skb);
 	}
 
 	return ret;
@@ -87,8 +87,7 @@ static void dsa_master_get_strings(struct net_device *dev, uint32_t stringset,
 	struct dsa_switch *ds = cpu_dp->ds;
 	int port = cpu_dp->index;
 	int len = ETH_GSTRING_LEN;
-	int mcount = 0, count;
-	unsigned int i;
+	int mcount = 0, count, i;
 	uint8_t pfx[4];
 	uint8_t *ndata;
 
@@ -118,6 +117,8 @@ static void dsa_master_get_strings(struct net_device *dev, uint32_t stringset,
 	 */
 	ds->ops->get_strings(ds, port, stringset, ndata);
 	count = ds->ops->get_sset_count(ds, port, stringset);
+	if (count < 0)
+		return;
 	for (i = 0; i < count; i++) {
 		memmove(ndata + (i * len + sizeof(pfx)),
 			ndata + i * len, len - sizeof(pfx));
@@ -598,13 +598,15 @@ static int dsa_slave_get_sset_count(struct net_device *dev, int sset)
 	struct dsa_switch *ds = dp->ds;
 
 	if (sset == ETH_SS_STATS) {
-		int count;
+		int count = 0;
 
-		count = 4;
-		if (ds->ops->get_sset_count)
-			count += ds->ops->get_sset_count(ds, dp->index, sset);
+		if (ds->ops->get_sset_count) {
+			count = ds->ops->get_sset_count(ds, dp->index, sset);
+			if (count < 0)
+				return count;
+		}
 
-		return count;
+		return count + 4;
 	}
 
 	return -EOPNOTSUPP;
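The slave-side fix above boils down to one rule: a driver's get_sset_count() callback may now fail with a negative errno, and that error must be returned as-is rather than blindly added to the four generic software counters. In sketch form, operating on the callback's already-obtained result (`total_sset_count` is a hypothetical helper, not the kernel API):

```c
#include <assert.h>

/* drv_count is the value returned by ds->ops->get_sset_count(), or 0
 * when the driver provides no callback at all. */
static int total_sset_count(int drv_count)
{
	if (drv_count < 0)
		return drv_count;	/* propagate the errno unchanged */
	return drv_count + 4;		/* plus the 4 generic software counters */
}
```

The old code's `count += ...` would have silently folded an errno such as -EOPNOTSUPP into the counter total, corrupting the ethtool stats layout.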
@@ -1606,10 +1606,7 @@ static struct sk_buff *mld_newpack(struct inet6_dev *idev, unsigned int mtu)
 		     IPV6_TLV_PADN, 0 };
 
 	/* we assume size > sizeof(ra) here */
-	/* limit our allocations to order-0 page */
-	size = min_t(int, size, SKB_MAX_ORDER(0, 0));
 	skb = sock_alloc_send_skb(sk, size, 1, &err);
-
 	if (!skb)
 		return NULL;
 
@@ -347,7 +347,7 @@ static int ipv6_frag_rcv(struct sk_buff *skb)
 	hdr = ipv6_hdr(skb);
 	fhdr = (struct frag_hdr *)skb_transport_header(skb);
 
-	if (!(fhdr->frag_off & htons(0xFFF9))) {
+	if (!(fhdr->frag_off & htons(IP6_OFFSET | IP6_MF))) {
 		/* It is not a fragmented frame */
 		skb->transport_header += sizeof(struct frag_hdr);
 		__IP6_INC_STATS(net,
@@ -355,6 +355,8 @@ static int ipv6_frag_rcv(struct sk_buff *skb)
 
 		IP6CB(skb)->nhoff = (u8 *)fhdr - skb_network_header(skb);
 		IP6CB(skb)->flags |= IP6SKB_FRAGMENTED;
+		IP6CB(skb)->frag_max_size = ntohs(hdr->payload_len) +
+					    sizeof(struct ipv6hdr);
 		return 1;
 	}
 
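The reassembly change swaps the magic constant 0xFFF9 for the named bits it was always composed of: the 13-bit fragment-offset field plus the more-fragments flag. The values below are copied from the kernel's definitions in <net/ipv6.h>, redefined locally so the identity can be checked standalone:

```c
#include <assert.h>

/* Redefined locally for a self-contained check; in the kernel these
 * come from <net/ipv6.h>. */
#define IP6_OFFSET 0xFFF8	/* fragment offset, upper 13 bits of frag_off */
#define IP6_MF     0x0001	/* more-fragments flag */

/* The named constants recompose exactly the old magic number, so the
 * patched condition is behaviorally identical, just self-documenting. */
static int frag_bits_match_magic(void)
{
	return (IP6_OFFSET | IP6_MF) == 0xFFF9;
}
```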
@@ -53,12 +53,6 @@ struct ieee80211_local;
 #define IEEE80211_ENCRYPT_HEADROOM 8
 #define IEEE80211_ENCRYPT_TAILROOM 18
 
-/* IEEE 802.11 (Ch. 9.5 Defragmentation) requires support for concurrent
- * reception of at least three fragmented frames. This limit can be increased
- * by changing this define, at the cost of slower frame reassembly and
- * increased memory use (about 2 kB of RAM per entry). */
-#define IEEE80211_FRAGMENT_MAX 4
-
 /* power level hasn't been configured (or set to automatic) */
 #define IEEE80211_UNSET_POWER_LEVEL	INT_MIN
 
@@ -91,18 +85,6 @@ extern const u8 ieee80211_ac_to_qos_mask[IEEE80211_NUM_ACS];
 
 #define IEEE80211_MAX_NAN_INSTANCE_ID 255
 
-struct ieee80211_fragment_entry {
-	struct sk_buff_head skb_list;
-	unsigned long first_frag_time;
-	u16 seq;
-	u16 extra_len;
-	u16 last_frag;
-	u8 rx_queue;
-	bool check_sequential_pn; /* needed for CCMP/GCMP */
-	u8 last_pn[6]; /* PN of the last fragment if CCMP was used */
-};
-
-
 struct ieee80211_bss {
 	u32 device_ts_beacon, device_ts_presp;
 
@@ -243,8 +225,15 @@ struct ieee80211_rx_data {
 	 */
 	int security_idx;
 
-	u32 tkip_iv32;
-	u16 tkip_iv16;
+	union {
+		struct {
+			u32 iv32;
+			u16 iv16;
+		} tkip;
+		struct {
+			u8 pn[IEEE80211_CCMP_PN_LEN];
+		} ccm_gcm;
+	};
 };
 
 struct ieee80211_csa_settings {
@@ -884,9 +873,7 @@ struct ieee80211_sub_if_data {
 
 	char name[IFNAMSIZ];
 
-	/* Fragment table for host-based reassembly */
-	struct ieee80211_fragment_entry	fragments[IEEE80211_FRAGMENT_MAX];
-	unsigned int fragment_next;
+	struct ieee80211_fragment_cache frags;
 
 	/* TID bitmap for NoAck policy */
 	u16 noack_map;
@@ -2204,4 +2191,7 @@ extern const struct ethtool_ops ieee80211_ethtool_ops;
 #define debug_noinline
 #endif
 
+void ieee80211_init_frag_cache(struct ieee80211_fragment_cache *cache);
+void ieee80211_destroy_frag_cache(struct ieee80211_fragment_cache *cache);
+
 #endif /* IEEE80211_I_H */
@@ -7,7 +7,7 @@
  * Copyright 2008, Johannes Berg <johannes@sipsolutions.net>
  * Copyright 2013-2014  Intel Mobile Communications GmbH
  * Copyright (c) 2016 Intel Deutschland GmbH
- * Copyright (C) 2018 Intel Corporation
+ * Copyright (C) 2018-2021 Intel Corporation
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License version 2 as
@@ -1111,16 +1111,12 @@ static void ieee80211_set_multicast_list(struct net_device *dev)
  */
 static void ieee80211_teardown_sdata(struct ieee80211_sub_if_data *sdata)
 {
-	int i;
-
 	/* free extra data */
 	ieee80211_free_keys(sdata, false);
 
 	ieee80211_debugfs_remove_netdev(sdata);
 
-	for (i = 0; i < IEEE80211_FRAGMENT_MAX; i++)
-		__skb_queue_purge(&sdata->fragments[i].skb_list);
-	sdata->fragment_next = 0;
+	ieee80211_destroy_frag_cache(&sdata->frags);
 
 	if (ieee80211_vif_is_mesh(&sdata->vif))
 		ieee80211_mesh_teardown_sdata(sdata);
@@ -1832,8 +1828,7 @@ int ieee80211_if_add(struct ieee80211_local *local, const char *name,
 	sdata->wdev.wiphy = local->hw.wiphy;
 	sdata->local = local;
 
-	for (i = 0; i < IEEE80211_FRAGMENT_MAX; i++)
-		skb_queue_head_init(&sdata->fragments[i].skb_list);
+	ieee80211_init_frag_cache(&sdata->frags);
 
 	INIT_LIST_HEAD(&sdata->key_list);
 