Merge tag 'ASB-2024-01-05_4.19-stable' of https://android.googlesource.com/kernel/common into android13-4.19-kona

https://source.android.com/docs/security/bulletin/2024-01-01

* tag 'ASB-2024-01-05_4.19-stable' of https://android.googlesource.com/kernel/common:
  Linux 4.19.304
  block: Don't invalidate pagecache for invalid falloc modes
  dm-integrity: don't modify bio's immutable bio_vec in integrity_metadata()
  smb: client: fix OOB in smbCalcSize()
  usb: fotg210-hcd: delete an incorrect bounds test
  usb: musb: fix MUSB_QUIRK_B_DISCONNECT_99 handling
  x86/alternatives: Sync core before enabling interrupts
  net: rfkill: gpio: set GPIO direction
  net: 9p: avoid freeing uninit memory in p9pdu_vreadf
  Bluetooth: hci_event: Fix not checking if HCI_OP_INQUIRY has been sent
  USB: serial: option: add Quectel RM500Q R13 firmware support
  USB: serial: option: add Foxconn T99W265 with new baseline
  USB: serial: option: add Quectel EG912Y module support
  USB: serial: ftdi_sio: update Actisense PIDs constant names
  wifi: cfg80211: fix certs build to not depend on file order
  wifi: cfg80211: Add my certificate
  iio: common: ms_sensors: ms_sensors_i2c: fix humidity conversion time table
  scsi: bnx2fc: Fix skb double free in bnx2fc_rcv()
  scsi: bnx2fc: Remove set but not used variable 'oxid'
  Input: ipaq-micro-keys - add error handling for devm_kmemdup
  iio: imu: inv_mpu6050: fix an error code problem in inv_mpu6050_read_raw
  btrfs: do not allow non subvolume root targets for snapshot
  smb: client: fix NULL deref in asn1_ber_decoder()
  pinctrl: at91-pio4: use dedicated lock class for IRQ
  net: check dev->gso_max_size in gso_features_check()
  net: warn if gso_type isn't set for a GSO SKB
  afs: Fix the dynamic root's d_delete to always delete unused dentries
  net: check vlan filter feature in vlan_vids_add_by_dev() and vlan_vids_del_by_dev()
  net/rose: fix races in rose_kill_by_device()
  ethernet: atheros: fix a memleak in atl1e_setup_ring_resources
  net: sched: ife: fix potential use-after-free
  net/mlx5: Fix fw tracer first block check
  net/mlx5: improve some comments
  wifi: mac80211: mesh_plink: fix matches_local logic
  s390/vx: fix save/restore of fpu kernel context
  reset: Fix crash when freeing non-existent optional resets
  ARM: OMAP2+: Fix null pointer dereference and memory leak in omap_soc_device_init
  ksmbd: fix wrong name of SMB2_CREATE_ALLOCATION_SIZE
  ALSA: hda/realtek: Enable headset on Lenovo M90 Gen5
  ALSA: hda/realtek: Enable headset onLenovo M70/M90
  ALSA: hda/realtek: Add quirk for Lenovo TianYi510Pro-14IOB
  arm64: dts: mediatek: mt8173-evb: Fix regulator-fixed node names
  Revert "cred: switch to using atomic_long_t"
  Linux 4.19.303
  powerpc/ftrace: Fix stack teardown in ftrace_no_trace
  powerpc/ftrace: Create a dummy stackframe to fix stack unwind
  mmc: block: Be sure to wait while busy in CQE error recovery
  ring-buffer: Fix memory leak of free page
  team: Fix use-after-free when an option instance allocation fails
  arm64: mm: Always make sw-dirty PTEs hw-dirty in pte_modify
  ext4: prevent the normalized size from exceeding EXT_MAX_BLOCKS
  perf: Fix perf_event_validate_size() lockdep splat
  HID: hid-asus: add const to read-only outgoing usb buffer
  net: usb: qmi_wwan: claim interface 4 for ZTE MF290
  asm-generic: qspinlock: fix queued_spin_value_unlocked() implementation
  HID: multitouch: Add quirk for HONOR GLO-GXXX touchpad
  HID: hid-asus: reset the backlight brightness level on resume
  HID: add ALWAYS_POLL quirk for Apple kb
  platform/x86: intel_telemetry: Fix kernel doc descriptions
  bcache: avoid NULL checking to c->root in run_cache_set()
  bcache: add code comments for bch_btree_node_get() and __bch_btree_node_alloc()
  bcache: avoid oversize memory allocation by small stripe_size
  blk-throttle: fix lockdep warning of "cgroup_mutex or RCU read lock required!"
  cred: switch to using atomic_long_t
  Revert "PCI: acpiphp: Reassign resources on bridge if necessary"
  appletalk: Fix Use-After-Free in atalk_ioctl
  net: stmmac: Handle disabled MDIO busses from devicetree
  vsock/virtio: Fix unsigned integer wrap around in virtio_transport_has_space()
  sign-file: Fix incorrect return values check
  net: Remove acked SYN flag from packet in the transmit queue correctly
  qed: Fix a potential use-after-free in qed_cxt_tables_alloc
  net/rose: Fix Use-After-Free in rose_ioctl
  atm: Fix Use-After-Free in do_vcc_ioctl
  atm: solos-pci: Fix potential deadlock on &tx_queue_lock
  atm: solos-pci: Fix potential deadlock on &cli_queue_lock
  qca_spi: Fix reset behavior
  qca_debug: Fix ethtool -G iface tx behavior
  qca_debug: Prevent crash on TX ring changes
  Revert "psample: Require 'CAP_NET_ADMIN' when joining "packets" group"
  Revert "genetlink: add CAP_NET_ADMIN test for multicast bind"
  Revert "drop_monitor: Require 'CAP_SYS_ADMIN' when joining "events" group"
  Revert "perf/core: Add a new read format to get a number of lost samples"
  Revert "perf: Fix perf_event_validate_size()"
  Revert "hrtimers: Push pending hrtimers away from outgoing CPU earlier"
  ANDROID: Snapshot Mainline's version of checkpatch.pl
  Linux 4.19.302
  devcoredump: Send uevent once devcd is ready
  devcoredump : Serialize devcd_del work
  IB/isert: Fix unaligned immediate-data handling
  tools headers UAPI: Sync linux/perf_event.h with the kernel sources
  drop_monitor: Require 'CAP_SYS_ADMIN' when joining "events" group
  psample: Require 'CAP_NET_ADMIN' when joining "packets" group
  genetlink: add CAP_NET_ADMIN test for multicast bind
  netlink: don't call ->netlink_bind with table lock held
  nilfs2: fix missing error check for sb_set_blocksize call
  KVM: s390/mm: Properly reset no-dat
  x86/CPU/AMD: Check vendor in the AMD microcode callback
  serial: 8250_omap: Add earlycon support for the AM654 UART controller
  serial: sc16is7xx: address RX timeout interrupt errata
  usb: typec: class: fix typec_altmode_put_partner to put plugs
  parport: Add support for Brainboxes IX/UC/PX parallel cards
  usb: gadget: f_hid: fix report descriptor allocation
  gpiolib: sysfs: Fix error handling on failed export
  perf: Fix perf_event_validate_size()
  perf/core: Add a new read format to get a number of lost samples
  tracing: Fix a possible race when disabling buffered events
  tracing: Fix incomplete locking when disabling buffered events
  tracing: Always update snapshot buffer size
  nilfs2: prevent WARNING in nilfs_sufile_set_segment_usage()
  packet: Move reference count in packet_sock to atomic_long_t
  ALSA: pcm: fix out-of-bounds in snd_pcm_state_names
  ARM: dts: imx7: Declare timers compatible with fsl,imx6dl-gpt
  ARM: dts: imx: make gpt node name generic
  ARM: imx: Check return value of devm_kasprintf in imx_mmdc_perf_init
  scsi: be2iscsi: Fix a memleak in beiscsi_init_wrb_handle()
  tracing: Fix a warning when allocating buffered events fails
  hwmon: (acpi_power_meter) Fix 4.29 MW bug
  RDMA/bnxt_re: Correct module description string
  tcp: do not accept ACK of bytes we never sent
  netfilter: xt_owner: Fix for unsafe access of sk->sk_socket
  netfilter: xt_owner: Add supplementary groups option
  net: hns: fix fake link up on xge port
  ipv4: ip_gre: Avoid skb_pull() failure in ipgre_xmit()
  arcnet: restoring support for multiple Sohard Arcnet cards
  net: arcnet: com20020 fix error handling
  net: arcnet: Fix RESET flag handling
  hv_netvsc: rndis_filter needs to select NLS
  ipv6: fix potential NULL deref in fib6_add()
  drm/amdgpu: correct chunk_ptr to a pointer to chunk.
  kconfig: fix memory leak from range properties
  tg3: Increment tx_dropped in tg3_tso_bug()
  tg3: Move the [rt]x_dropped counters to tg3_napi
  netfilter: ipset: fix race condition between swap/destroy and kernel side add/del/test
  hrtimers: Push pending hrtimers away from outgoing CPU earlier
  media: davinci: vpif_capture: fix potential double free
  spi: imx: mx51-ecspi: Move some initialisation to prepare_message hook.
  spi: imx: correct wml as the last sg length
  spi: imx: move wml setting to later than setup_transfer
  spi: imx: add a device specific prepare_message callback
  Linux 4.19.301
  mmc: block: Retry commands in CQE error recovery
  mmc: core: convert comma to semicolon
  mmc: cqhci: Fix task clearing in CQE error recovery
  mmc: cqhci: Warn of halt or task clear failure
  mmc: cqhci: Increase recovery halt timeout
  cpufreq: imx6q: Don't disable 792 Mhz OPP unnecessarily
  cpufreq: imx6q: don't warn for disabling a non-existing frequency
  ima: detect changes to the backing overlay file
  ovl: skip overlayfs superblocks at global sync
  ima: annotate iint mutex to avoid lockdep false positive warnings
  fbdev: stifb: Make the STI next font pointer a 32-bit signed offset
  mtd: cfi_cmdset_0001: Byte swap OTP info
  mtd: cfi_cmdset_0001: Support the absence of protection registers
  s390/cmma: fix detection of DAT pages
  s390/mm: fix phys vs virt confusion in mark_kernel_pXd() functions family
  smb3: fix touch -h of symlink
  net: ravb: Start TX queues after HW initialization succeeded
  ravb: Fix races between ravb_tx_timeout_work() and net related ops
  ipv4: igmp: fix refcnt uaf issue when receiving igmp query packet
  Input: xpad - add HyperX Clutch Gladiate Support
  btrfs: send: ensure send_fd is writable
  btrfs: fix off-by-one when checking chunk map includes logical address
  powerpc: Don't clobber f0/vs0 during fp|altivec register save
  bcache: revert replacing IS_ERR_OR_NULL with IS_ERR
  dm verity: don't perform FEC for failed readahead IO
  dm-verity: align struct dm_verity_fec_io properly
  ALSA: hda/realtek: Headset Mic VREF to 100%
  ALSA: hda: Disable power-save on KONTRON SinglePC
  mmc: block: Do not lose cache flush during CQE error recovery
  firewire: core: fix possible memory leak in create_units()
  pinctrl: avoid reload of p state in list iteration
  USB: dwc3: qcom: fix wakeup after probe deferral
  usb: dwc3: set the dma max_seg_size
  USB: dwc2: write HCINT with INTMASK applied
  USB: serial: option: don't claim interface 4 for ZTE MF290
  USB: serial: option: fix FM101R-GL defines
  USB: serial: option: add Fibocom L7xx modules
  bcache: prevent potential division by zero error
  bcache: check return value from btree_node_alloc_replacement()
  dm-delay: fix a race between delay_presuspend and delay_bio
  hv_netvsc: Mark VF as slave before exposing it to user-mode
  hv_netvsc: Fix race of register_netdevice_notifier and VF register
  USB: serial: option: add Luat Air72*U series products
  s390/dasd: protect device queue against concurrent access
  bcache: replace a mistaken IS_ERR() by IS_ERR_OR_NULL() in btree_gc_coalesce()
  mtd: rawnand: brcmnand: Fix ecc chunk calculation for erased page bitfips
  KVM: arm64: limit PMU version to PMUv3 for ARMv8.1
  arm64: cpufeature: Extract capped perfmon fields
  MIPS: KVM: Fix a build warning about variable set but not used
  net: axienet: Fix check for partial TX checksum
  amd-xgbe: propagate the correct speed and duplex status
  amd-xgbe: handle the corner-case during tx completion
  amd-xgbe: handle corner-case during sfp hotplug
  arm/xen: fix xen_vcpu_info allocation alignment
  net: usb: ax88179_178a: fix failed operations during ax88179_reset
  ipv4: Correct/silence an endian warning in __ip_do_redirect
  HID: fix HID device resource race between HID core and debugging support
  HID: core: store the unique system identifier in hid_device
  drm/rockchip: vop: Fix color for RGB888/BGR888 format on VOP full
  ata: pata_isapnp: Add missing error check for devm_ioport_map()
  drm/panel: simple: Fix Innolux G101ICE-L01 timings
  RDMA/irdma: Prevent zero-length STAG registration
  driver core: Release all resources during unbind before updating device links

Conflicts:
	drivers/mmc/host/cqhci.c
	drivers/net/usb/ax88179_178a.c
	drivers/usb/dwc3/core.c
	scripts/checkpatch.pl

Change-Id: I571c71df4f4c1c612d4101c9b9c2b901b4408103
commit bfc560ed37

171 changed files with 3063 additions and 1210 deletions

 Makefile | 2 +-
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 4
 PATCHLEVEL = 19
-SUBLEVEL = 300
+SUBLEVEL = 304
 EXTRAVERSION =
 NAME = "People's Front"
@@ -565,7 +565,7 @@
 			status = "disabled";
 		};

-		gpt: gpt@2098000 {
+		gpt: timer@2098000 {
 			compatible = "fsl,imx6q-gpt", "fsl,imx31-gpt";
 			reg = <0x02098000 0x4000>;
 			interrupts = <0 55 IRQ_TYPE_LEVEL_HIGH>;
@@ -374,7 +374,7 @@
 			clock-names = "ipg", "per";
 		};

-		gpt: gpt@2098000 {
+		gpt: timer@2098000 {
 			compatible = "fsl,imx6sl-gpt";
 			reg = <0x02098000 0x4000>;
 			interrupts = <0 55 IRQ_TYPE_LEVEL_HIGH>;
@@ -468,7 +468,7 @@
 			status = "disabled";
 		};

-		gpt: gpt@2098000 {
+		gpt: timer@2098000 {
 			compatible = "fsl,imx6sx-gpt", "fsl,imx6dl-gpt";
 			reg = <0x02098000 0x4000>;
 			interrupts = <GIC_SPI 55 IRQ_TYPE_LEVEL_HIGH>;
@@ -423,7 +423,7 @@
 			status = "disabled";
 		};

-		gpt1: gpt@2098000 {
+		gpt1: timer@2098000 {
 			compatible = "fsl,imx6ul-gpt", "fsl,imx6sx-gpt";
 			reg = <0x02098000 0x4000>;
 			interrupts = <GIC_SPI 55 IRQ_TYPE_LEVEL_HIGH>;
@@ -696,7 +696,7 @@
 			reg = <0x020e4000 0x4000>;
 		};

-		gpt2: gpt@20e8000 {
+		gpt2: timer@20e8000 {
 			compatible = "fsl,imx6ul-gpt", "fsl,imx6sx-gpt";
 			reg = <0x020e8000 0x4000>;
 			interrupts = <GIC_SPI 109 IRQ_TYPE_LEVEL_HIGH>;
@@ -439,8 +439,8 @@
 			fsl,input-sel = <&iomuxc>;
 		};

-		gpt1: gpt@302d0000 {
-			compatible = "fsl,imx7d-gpt", "fsl,imx6sx-gpt";
+		gpt1: timer@302d0000 {
+			compatible = "fsl,imx7d-gpt", "fsl,imx6dl-gpt";
 			reg = <0x302d0000 0x10000>;
 			interrupts = <GIC_SPI 55 IRQ_TYPE_LEVEL_HIGH>;
 			clocks = <&clks IMX7D_GPT1_ROOT_CLK>,
@@ -448,8 +448,8 @@
 			clock-names = "ipg", "per";
 		};

-		gpt2: gpt@302e0000 {
-			compatible = "fsl,imx7d-gpt", "fsl,imx6sx-gpt";
+		gpt2: timer@302e0000 {
+			compatible = "fsl,imx7d-gpt", "fsl,imx6dl-gpt";
 			reg = <0x302e0000 0x10000>;
 			interrupts = <GIC_SPI 54 IRQ_TYPE_LEVEL_HIGH>;
 			clocks = <&clks IMX7D_GPT2_ROOT_CLK>,
@@ -458,8 +458,8 @@
 			status = "disabled";
 		};

-		gpt3: gpt@302f0000 {
-			compatible = "fsl,imx7d-gpt", "fsl,imx6sx-gpt";
+		gpt3: timer@302f0000 {
+			compatible = "fsl,imx7d-gpt", "fsl,imx6dl-gpt";
 			reg = <0x302f0000 0x10000>;
 			interrupts = <GIC_SPI 53 IRQ_TYPE_LEVEL_HIGH>;
 			clocks = <&clks IMX7D_GPT3_ROOT_CLK>,
@@ -468,8 +468,8 @@
 			status = "disabled";
 		};

-		gpt4: gpt@30300000 {
-			compatible = "fsl,imx7d-gpt", "fsl,imx6sx-gpt";
+		gpt4: timer@30300000 {
+			compatible = "fsl,imx7d-gpt", "fsl,imx6dl-gpt";
 			reg = <0x30300000 0x10000>;
 			interrupts = <GIC_SPI 52 IRQ_TYPE_LEVEL_HIGH>;
 			clocks = <&clks IMX7D_GPT4_ROOT_CLK>,
@@ -513,6 +513,10 @@ static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_b

	name = devm_kasprintf(&pdev->dev,
			      GFP_KERNEL, "mmdc%d", ret);
+	if (!name) {
+		ret = -ENOMEM;
+		goto pmu_release_id;
+	}

	pmu_mmdc->mmdc_ipg_clk = mmdc_ipg_clk;
	pmu_mmdc->devtype_data = (struct fsl_mmdc_devtype_data *)of_id->data;
@@ -535,9 +539,10 @@ static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_b

 pmu_register_err:
	pr_warn("MMDC Perf PMU failed (%d), disabled\n", ret);
-	ida_simple_remove(&mmdc_ida, pmu_mmdc->id);
	cpuhp_state_remove_instance_nocalls(cpuhp_mmdc_state, &pmu_mmdc->node);
	hrtimer_cancel(&pmu_mmdc->hrtimer);
+pmu_release_id:
+	ida_simple_remove(&mmdc_ida, pmu_mmdc->id);
 pmu_free:
	kfree(pmu_mmdc);
	return ret;
@@ -800,10 +800,15 @@ void __init omap_soc_device_init(void)

	soc_dev_attr->machine = soc_name;
	soc_dev_attr->family = omap_get_family();
+	if (!soc_dev_attr->family) {
+		kfree(soc_dev_attr);
+		return;
+	}
	soc_dev_attr->revision = soc_rev;

	soc_dev = soc_device_register(soc_dev_attr);
	if (IS_ERR(soc_dev)) {
+		kfree(soc_dev_attr->family);
		kfree(soc_dev_attr);
		return;
	}
@@ -388,7 +388,8 @@ static int __init xen_guest_init(void)
	 * for secondary CPUs as they are brought up.
	 * For uniformity we use VCPUOP_register_vcpu_info even on cpu0.
	 */
-	xen_vcpu_info = alloc_percpu(struct vcpu_info);
+	xen_vcpu_info = __alloc_percpu(sizeof(struct vcpu_info),
+				       1 << fls(sizeof(struct vcpu_info) - 1));
	if (xen_vcpu_info == NULL)
		return -ENOMEM;

@@ -51,7 +51,7 @@
		id-gpio = <&pio 16 GPIO_ACTIVE_HIGH>;
	};

-	usb_p1_vbus: regulator@0 {
+	usb_p1_vbus: regulator-usb-p1 {
		compatible = "regulator-fixed";
		regulator-name = "usb_vbus";
		regulator-min-microvolt = <5000000>;
@@ -60,7 +60,7 @@
		enable-active-high;
	};

-	usb_p0_vbus: regulator@1 {
+	usb_p0_vbus: regulator-usb-p0 {
		compatible = "regulator-fixed";
		regulator-name = "vbus";
		regulator-min-microvolt = <5000000>;
@@ -422,6 +422,29 @@ cpuid_feature_extract_unsigned_field(u64 features, int field)
	return cpuid_feature_extract_unsigned_field_width(features, field, 4);
 }

+/*
+ * Fields that identify the version of the Performance Monitors Extension do
+ * not follow the standard ID scheme. See ARM DDI 0487E.a page D13-2825,
+ * "Alternative ID scheme used for the Performance Monitors Extension version".
+ */
+static inline u64 __attribute_const__
+cpuid_feature_cap_perfmon_field(u64 features, int field, u64 cap)
+{
+	u64 val = cpuid_feature_extract_unsigned_field(features, field);
+	u64 mask = GENMASK_ULL(field + 3, field);
+
+	/* Treat IMPLEMENTATION DEFINED functionality as unimplemented */
+	if (val == 0xf)
+		val = 0;
+
+	if (val > cap) {
+		features &= ~mask;
+		features |= (cap << field) & mask;
+	}
+
+	return features;
+}
+
 static inline u64 arm64_ftr_mask(const struct arm64_ftr_bits *ftrp)
 {
	return (u64)GENMASK(ftrp->shift + ftrp->width - 1, ftrp->shift);
@@ -639,6 +639,12 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
	if (pte_hw_dirty(pte))
		pte = pte_mkdirty(pte);
	pte_val(pte) = (pte_val(pte) & ~mask) | (pgprot_val(newprot) & mask);
+	/*
+	 * If we end up clearing hw dirtiness for a sw-dirty PTE, set hardware
+	 * dirtiness again.
+	 */
+	if (pte_sw_dirty(pte))
+		pte = pte_mkdirty(pte);
	return pte;
 }

@@ -632,6 +632,12 @@
 #define ID_AA64DFR0_TRACEVER_SHIFT	4
 #define ID_AA64DFR0_DEBUGVER_SHIFT	0

+#define ID_AA64DFR0_PMUVER_8_1		0x4
+
+#define ID_DFR0_PERFMON_SHIFT		24
+
+#define ID_DFR0_PERFMON_8_1		0x4
+
 #define ID_ISAR5_RDM_SHIFT		24
 #define ID_ISAR5_CRC32_SHIFT		16
 #define ID_ISAR5_SHA2_SHIFT		12
@@ -1049,6 +1049,16 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
		kvm_debug("LORegions unsupported for guests, suppressing\n");

		val &= ~(0xfUL << ID_AA64MMFR1_LOR_SHIFT);
+	} else if (id == SYS_ID_AA64DFR0_EL1) {
+		/* Limit guests to PMUv3 for ARMv8.1 */
+		val = cpuid_feature_cap_perfmon_field(val,
+						ID_AA64DFR0_PMUVER_SHIFT,
+						ID_AA64DFR0_PMUVER_8_1);
+	} else if (id == SYS_ID_DFR0_EL1) {
+		/* Limit guests to PMUv3 for ARMv8.1 */
+		val = cpuid_feature_cap_perfmon_field(val,
+						ID_DFR0_PERFMON_SHIFT,
+						ID_DFR0_PERFMON_8_1);
	}

	return val;
@@ -692,7 +692,7 @@ static int kvm_mips_map_page(struct kvm_vcpu *vcpu, unsigned long gpa,
	gfn_t gfn = gpa >> PAGE_SHIFT;
	int srcu_idx, err;
	kvm_pfn_t pfn;
-	pte_t *ptep, entry, old_pte;
+	pte_t *ptep, entry;
	bool writeable;
	unsigned long prot_bits;
	unsigned long mmu_seq;
@@ -765,7 +765,6 @@ retry:
	entry = pfn_pte(pfn, __pgprot(prot_bits));

	/* Write the PTE */
-	old_pte = *ptep;
	set_pte(ptep, entry);

	err = 0;
@@ -29,6 +29,15 @@
 #include <asm/feature-fixups.h>

 #ifdef CONFIG_VSX
+#define __REST_1FPVSR(n,c,base)						\
+BEGIN_FTR_SECTION							\
+	b	2f;							\
+END_FTR_SECTION_IFSET(CPU_FTR_VSX);					\
+	REST_FPR(n,base);						\
+	b	3f;							\
+2:	REST_VSR(n,c,base);						\
+3:
+
 #define __REST_32FPVSRS(n,c,base)					\
 BEGIN_FTR_SECTION							\
	b	2f;							\
@@ -47,9 +56,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX);					\
 2:	SAVE_32VSRS(n,c,base);						\
 3:
 #else
+#define __REST_1FPVSR(n,b,base)		REST_FPR(n, base)
 #define __REST_32FPVSRS(n,b,base)	REST_32FPRS(n, base)
 #define __SAVE_32FPVSRS(n,b,base)	SAVE_32FPRS(n, base)
 #endif
+#define REST_1FPVSR(n,c,base)   __REST_1FPVSR(n,__REG_##c,__REG_##base)
 #define REST_32FPVSRS(n,c,base) __REST_32FPVSRS(n,__REG_##c,__REG_##base)
 #define SAVE_32FPVSRS(n,c,base) __SAVE_32FPVSRS(n,__REG_##c,__REG_##base)

@@ -72,6 +83,7 @@ _GLOBAL(store_fp_state)
	SAVE_32FPVSRS(0, R4, R3)
	mffs	fr0
	stfd	fr0,FPSTATE_FPSCR(r3)
+	REST_1FPVSR(0, R4, R3)
	blr
 EXPORT_SYMBOL(store_fp_state)

@@ -136,6 +148,7 @@ _GLOBAL(save_fpu)
 2:	SAVE_32FPVSRS(0, R4, R6)
	mffs	fr0
	stfd	fr0,FPSTATE_FPSCR(r6)
+	REST_1FPVSR(0, R4, R6)
	blr

 /*
@@ -40,6 +40,9 @@ _GLOBAL(ftrace_regs_caller)
	/* Save the original return address in A's stack frame */
	std	r0,LRSAVE(r1)

+	/* Create a minimal stack frame for representing B */
+	stdu	r1, -STACK_FRAME_MIN_SIZE(r1)
+
	/* Create our stack frame + pt_regs */
	stdu	r1,-SWITCH_FRAME_SIZE(r1)

@@ -56,7 +59,7 @@ _GLOBAL(ftrace_regs_caller)
	SAVE_10GPRS(22, r1)

	/* Save previous stack pointer (r1) */
-	addi	r8, r1, SWITCH_FRAME_SIZE
+	addi	r8, r1, SWITCH_FRAME_SIZE+STACK_FRAME_MIN_SIZE
	std	r8, GPR1(r1)

	/* Load special regs for save below */
@@ -69,6 +72,8 @@ _GLOBAL(ftrace_regs_caller)
	mflr	r7
	/* Save it as pt_regs->nip */
	std	r7, _NIP(r1)
+	/* Also save it in B's stackframe header for proper unwind */
+	std	r7, LRSAVE+SWITCH_FRAME_SIZE(r1)
	/* Save the read LR in pt_regs->link */
	std	r0, _LINK(r1)

@@ -125,7 +130,7 @@ ftrace_regs_call:
	ld	r2, 24(r1)

	/* Pop our stack frame */
-	addi	r1, r1, SWITCH_FRAME_SIZE
+	addi	r1, r1, SWITCH_FRAME_SIZE+STACK_FRAME_MIN_SIZE

 #ifdef CONFIG_LIVEPATCH
	/* Based on the cmpd above, if the NIP was altered handle livepatch */
@@ -149,7 +154,7 @@ ftrace_no_trace:
	mflr	r3
	mtctr	r3
	REST_GPR(3, r1)
-	addi	r1, r1, SWITCH_FRAME_SIZE
+	addi	r1, r1, SWITCH_FRAME_SIZE+STACK_FRAME_MIN_SIZE
	mtlr	r0
	bctr

@@ -157,6 +162,9 @@ _GLOBAL(ftrace_caller)
	/* Save the original return address in A's stack frame */
	std	r0, LRSAVE(r1)

+	/* Create a minimal stack frame for representing B */
+	stdu	r1, -STACK_FRAME_MIN_SIZE(r1)
+
	/* Create our stack frame + pt_regs */
	stdu	r1, -SWITCH_FRAME_SIZE(r1)

@@ -170,6 +178,7 @@ _GLOBAL(ftrace_caller)
	/* Get the _mcount() call site out of LR */
	mflr	r7
	std	r7, _NIP(r1)
+	std	r7, LRSAVE+SWITCH_FRAME_SIZE(r1)

	/* Save callee's TOC in the ABI compliant location */
	std	r2, 24(r1)
@@ -204,7 +213,7 @@ ftrace_call:
	ld	r2, 24(r1)

	/* Pop our stack frame */
-	addi	r1, r1, SWITCH_FRAME_SIZE
+	addi	r1, r1, SWITCH_FRAME_SIZE+STACK_FRAME_MIN_SIZE

	/* Reload original LR */
	ld	r0, LRSAVE(r1)
@@ -31,6 +31,7 @@ _GLOBAL(store_vr_state)
	mfvscr	v0
	li	r4, VRSTATE_VSCR
	stvx	v0, r4, r3
+	lvx	v0, 0, r3
	blr
 EXPORT_SYMBOL(store_vr_state)

@@ -101,6 +102,7 @@ _GLOBAL(save_altivec)
	mfvscr	v0
	li	r4,VRSTATE_VSCR
	stvx	v0,r4,r7
+	lvx	v0,0,r7
	blr

 #ifdef CONFIG_VSX
@@ -76,7 +76,7 @@ static inline int test_fp_ctl(u32 fpc)
 #define KERNEL_VXR_HIGH		(KERNEL_VXR_V16V23|KERNEL_VXR_V24V31)

 #define KERNEL_VXR		(KERNEL_VXR_LOW|KERNEL_VXR_HIGH)
-#define KERNEL_FPR		(KERNEL_FPC|KERNEL_VXR_V0V7)
+#define KERNEL_FPR		(KERNEL_FPC|KERNEL_VXR_LOW)

 struct kernel_fpu;

@@ -118,7 +118,7 @@ static void mark_kernel_pmd(pud_t *pud, unsigned long addr, unsigned long end)
		next = pmd_addr_end(addr, end);
		if (pmd_none(*pmd) || pmd_large(*pmd))
			continue;
-		page = virt_to_page(pmd_val(*pmd));
+		page = phys_to_page(pmd_val(*pmd));
		set_bit(PG_arch_1, &page->flags);
	} while (pmd++, addr = next, addr != end);
 }
@@ -136,8 +136,8 @@ static void mark_kernel_pud(p4d_t *p4d, unsigned long addr, unsigned long end)
		if (pud_none(*pud) || pud_large(*pud))
			continue;
		if (!pud_folded(*pud)) {
-			page = virt_to_page(pud_val(*pud));
-			for (i = 0; i < 3; i++)
+			page = phys_to_page(pud_val(*pud));
+			for (i = 0; i < 4; i++)
				set_bit(PG_arch_1, &page[i].flags);
		}
		mark_kernel_pmd(pud, addr, next);
@ -157,8 +157,8 @@ static void mark_kernel_p4d(pgd_t *pgd, unsigned long addr, unsigned long end)
|
||||||
if (p4d_none(*p4d))
|
if (p4d_none(*p4d))
|
||||||
continue;
|
continue;
|
||||||
if (!p4d_folded(*p4d)) {
|
if (!p4d_folded(*p4d)) {
|
||||||
page = virt_to_page(p4d_val(*p4d));
|
page = phys_to_page(p4d_val(*p4d));
|
||||||
for (i = 0; i < 3; i++)
|
for (i = 0; i < 4; i++)
|
||||||
set_bit(PG_arch_1, &page[i].flags);
|
set_bit(PG_arch_1, &page[i].flags);
|
||||||
}
|
}
|
||||||
mark_kernel_pud(p4d, addr, next);
|
mark_kernel_pud(p4d, addr, next);
|
||||||
|
@ -179,8 +179,8 @@ static void mark_kernel_pgd(void)
|
||||||
if (pgd_none(*pgd))
|
if (pgd_none(*pgd))
|
||||||
continue;
|
continue;
|
||||||
if (!pgd_folded(*pgd)) {
|
if (!pgd_folded(*pgd)) {
|
||||||
page = virt_to_page(pgd_val(*pgd));
|
page = phys_to_page(pgd_val(*pgd));
|
||||||
for (i = 0; i < 3; i++)
|
for (i = 0; i < 4; i++)
|
||||||
set_bit(PG_arch_1, &page[i].flags);
|
set_bit(PG_arch_1, &page[i].flags);
|
||||||
}
|
}
|
||||||
mark_kernel_p4d(pgd, addr, next);
|
mark_kernel_p4d(pgd, addr, next);
|
||||||
|
|
|
@@ -699,7 +699,7 @@ void ptep_zap_unused(struct mm_struct *mm, unsigned long addr,
 			pte_clear(mm, addr, ptep);
 		}
 		if (reset)
-			pgste_val(pgste) &= ~_PGSTE_GPS_USAGE_MASK;
+			pgste_val(pgste) &= ~(_PGSTE_GPS_USAGE_MASK | _PGSTE_GPS_NODAT);
 		pgste_set_unlock(ptep, pgste);
 		preempt_enable();
 }
@@ -690,8 +690,8 @@ void *__init_or_module text_poke_early(void *addr, const void *opcode,
 	} else {
 		local_irq_save(flags);
 		memcpy(addr, opcode, len);
-		local_irq_restore(flags);
 		sync_core();
+		local_irq_restore(flags);
 
 		/*
 		 * Could also do a CLFLUSH here to speed up CPU recovery; but
@@ -1253,5 +1253,8 @@ static void zenbleed_check_cpu(void *unused)
 
 void amd_check_microcode(void)
 {
+	if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
+		return;
+
 	on_each_cpu(zenbleed_check_cpu, NULL, 1);
 }
@@ -1383,6 +1383,7 @@ static void tg_conf_updated(struct throtl_grp *tg, bool global)
 		   tg_bps_limit(tg, READ), tg_bps_limit(tg, WRITE),
 		   tg_iops_limit(tg, READ), tg_iops_limit(tg, WRITE));
 
+	rcu_read_lock();
 	/*
 	 * Update has_rules[] flags for the updated tg's subtree.  A tg is
 	 * considered to have rules if either the tg itself or any of its
@@ -1410,6 +1411,7 @@ static void tg_conf_updated(struct throtl_grp *tg, bool global)
 			this_tg->latency_target = max(this_tg->latency_target,
 					parent_tg->latency_target);
 		}
+	rcu_read_unlock();
 
 	/*
 	 * We're already holding queue_lock and know @tg is valid.  Let's
@@ -81,6 +81,9 @@ static int isapnp_init_one(struct pnp_dev *idev, const struct pnp_device_id *dev
 	if (pnp_port_valid(idev, 1)) {
 		ctl_addr = devm_ioport_map(&idev->dev,
 					   pnp_port_start(idev, 1), 1);
+		if (!ctl_addr)
+			return -ENOMEM;
+
 		ap->ioaddr.altstatus_addr = ctl_addr;
 		ap->ioaddr.ctl_addr = ctl_addr;
 		ap->ops = &isapnp_port_ops;
@@ -458,9 +458,9 @@ static ssize_t console_show(struct device *dev, struct device_attribute *attr,
 	struct sk_buff *skb;
 	unsigned int len;
 
-	spin_lock(&card->cli_queue_lock);
+	spin_lock_bh(&card->cli_queue_lock);
 	skb = skb_dequeue(&card->cli_queue[SOLOS_CHAN(atmdev)]);
-	spin_unlock(&card->cli_queue_lock);
+	spin_unlock_bh(&card->cli_queue_lock);
 	if(skb == NULL)
 		return sprintf(buf, "No data.\n");
 
@@ -968,14 +968,14 @@ static void pclose(struct atm_vcc *vcc)
 	struct pkt_hdr *header;
 
 	/* Remove any yet-to-be-transmitted packets from the pending queue */
-	spin_lock(&card->tx_queue_lock);
+	spin_lock_bh(&card->tx_queue_lock);
 	skb_queue_walk_safe(&card->tx_queue[port], skb, tmpskb) {
 		if (SKB_CB(skb)->vcc == vcc) {
 			skb_unlink(skb, &card->tx_queue[port]);
 			solos_pop(vcc, skb);
 		}
 	}
-	spin_unlock(&card->tx_queue_lock);
+	spin_unlock_bh(&card->tx_queue_lock);
 
 	skb = alloc_skb(sizeof(*header), GFP_KERNEL);
 	if (!skb) {
@@ -1016,8 +1016,6 @@ static void __device_release_driver(struct device *dev, struct device *parent)
 		else if (drv->remove)
 			drv->remove(dev);
 
-		device_links_driver_cleanup(dev);
-
 		devres_release_all(dev);
 		dma_deconfigure(dev);
 		dev->driver = NULL;
@@ -1027,6 +1025,8 @@ static void __device_release_driver(struct device *dev, struct device *parent)
 		pm_runtime_reinit(dev);
 		dev_pm_set_driver_flags(dev, 0);
 
+		device_links_driver_cleanup(dev);
+
 		klist_remove(&dev->p->knode_driver);
 		device_pm_check_callbacks(dev);
 		if (dev->bus)
@@ -29,6 +29,47 @@ struct devcd_entry {
 	struct device devcd_dev;
 	void *data;
 	size_t datalen;
+	/*
+	 * Here, mutex is required to serialize the calls to del_wk work between
+	 * user/kernel space which happens when devcd is added with device_add()
+	 * and that sends uevent to user space. User space reads the uevents,
+	 * and calls to devcd_data_write() which try to modify the work which is
+	 * not even initialized/queued from devcoredump.
+	 *
+	 *
+	 *
+	 *        cpu0(X)                                 cpu1(Y)
+	 *
+	 *        dev_coredump() uevent sent to user space
+	 *        device_add()  ======================> user space process Y reads the
+	 *                                              uevents writes to devcd fd
+	 *                                              which results into writes to
+	 *
+	 *                                             devcd_data_write()
+	 *                                               mod_delayed_work()
+	 *                                                 try_to_grab_pending()
+	 *                                                   del_timer()
+	 *                                                     debug_assert_init()
+	 *       INIT_DELAYED_WORK()
+	 *       schedule_delayed_work()
+	 *
+	 *
+	 * Also, mutex alone would not be enough to avoid scheduling of
+	 * del_wk work after it get flush from a call to devcd_free()
+	 * mentioned as below.
+	 *
+	 *        disabled_store()
+	 *        devcd_free()
+	 *          mutex_lock()             devcd_data_write()
+	 *          flush_delayed_work()
+	 *          mutex_unlock()
+	 *                                   mutex_lock()
+	 *                                   mod_delayed_work()
+	 *                                   mutex_unlock()
+	 * So, delete_work flag is required.
+	 */
+	struct mutex mutex;
+	bool delete_work;
 	struct module *owner;
 	ssize_t (*read)(char *buffer, loff_t offset, size_t count,
 			void *data, size_t datalen);
@@ -88,7 +129,12 @@ static ssize_t devcd_data_write(struct file *filp, struct kobject *kobj,
 	struct device *dev = kobj_to_dev(kobj);
 	struct devcd_entry *devcd = dev_to_devcd(dev);
 
-	mod_delayed_work(system_wq, &devcd->del_wk, 0);
+	mutex_lock(&devcd->mutex);
+	if (!devcd->delete_work) {
+		devcd->delete_work = true;
+		mod_delayed_work(system_wq, &devcd->del_wk, 0);
+	}
+	mutex_unlock(&devcd->mutex);
 
 	return count;
 }
@@ -116,7 +162,12 @@ static int devcd_free(struct device *dev, void *data)
 {
 	struct devcd_entry *devcd = dev_to_devcd(dev);
 
+	mutex_lock(&devcd->mutex);
+	if (!devcd->delete_work)
+		devcd->delete_work = true;
+
 	flush_delayed_work(&devcd->del_wk);
+	mutex_unlock(&devcd->mutex);
 	return 0;
 }
 
@@ -126,6 +177,30 @@ static ssize_t disabled_show(struct class *class, struct class_attribute *attr,
 	return sprintf(buf, "%d\n", devcd_disabled);
 }
 
+/*
+ *
+ *        disabled_store()                          worker()
+ *         class_for_each_device(&devcd_class,
+ *              NULL, NULL, devcd_free)
+ *         ...
+ *         ...
+ *         while ((dev = class_dev_iter_next(&iter))
+ *                                                  devcd_del()
+ *                                                    device_del()
+ *                                                      put_device() <- last reference
+ *             error = fn(dev, data)                devcd_dev_release()
+ *             devcd_free(dev, data)                kfree(devcd)
+ *             mutex_lock(&devcd->mutex);
+ *
+ *
+ * In the above diagram, It looks like disabled_store() would be racing with parallely
+ * running devcd_del() and result in memory abort while acquiring devcd->mutex which
+ * is called after kfree of devcd memory after dropping its last reference with
+ * put_device(). However, this will not happens as fn(dev, data) runs
+ * with its own reference to device via klist_node so it is not its last reference.
+ * so, above situation would not occur.
+ */
+
 static ssize_t disabled_store(struct class *class, struct class_attribute *attr,
 			      const char *buf, size_t count)
 {
@@ -291,13 +366,17 @@ void dev_coredumpm(struct device *dev, struct module *owner,
 	devcd->read = read;
 	devcd->free = free;
 	devcd->failing_dev = get_device(dev);
+	devcd->delete_work = false;
 
+	mutex_init(&devcd->mutex);
 	device_initialize(&devcd->devcd_dev);
 
 	dev_set_name(&devcd->devcd_dev, "devcd%d",
 		     atomic_inc_return(&devcd_count));
 	devcd->devcd_dev.class = &devcd_class;
 
+	mutex_lock(&devcd->mutex);
+	dev_set_uevent_suppress(&devcd->devcd_dev, true);
 	if (device_add(&devcd->devcd_dev))
 		goto put_device;
 
@@ -309,12 +388,15 @@ void dev_coredumpm(struct device *dev, struct module *owner,
 			      "devcoredump"))
 		/* nothing - symlink will be missing */;
 
+	dev_set_uevent_suppress(&devcd->devcd_dev, false);
+	kobject_uevent(&devcd->devcd_dev.kobj, KOBJ_ADD);
 	INIT_DELAYED_WORK(&devcd->del_wk, devcd_del);
 	schedule_delayed_work(&devcd->del_wk, DEVCD_TIMEOUT);
+	mutex_unlock(&devcd->mutex);
 	return;
  put_device:
 	put_device(&devcd->devcd_dev);
+	mutex_unlock(&devcd->mutex);
  put_module:
 	module_put(owner);
  free:
@@ -240,6 +240,14 @@ static struct cpufreq_driver imx6q_cpufreq_driver = {
 	.suspend = cpufreq_generic_suspend,
 };
 
+static void imx6x_disable_freq_in_opp(struct device *dev, unsigned long freq)
+{
+	int ret = dev_pm_opp_disable(dev, freq);
+
+	if (ret < 0 && ret != -ENODEV)
+		dev_warn(dev, "failed to disable %ldMHz OPP\n", freq / 1000000);
+}
+
 #define OCOTP_CFG3			0x440
 #define OCOTP_CFG3_SPEED_SHIFT		16
 #define OCOTP_CFG3_SPEED_1P2GHZ		0x3
@@ -275,17 +283,15 @@ static void imx6q_opp_check_speed_grading(struct device *dev)
 	val &= 0x3;
 
 	if (val < OCOTP_CFG3_SPEED_996MHZ)
-		if (dev_pm_opp_disable(dev, 996000000))
-			dev_warn(dev, "failed to disable 996MHz OPP\n");
+		imx6x_disable_freq_in_opp(dev, 996000000);
 
 	if (of_machine_is_compatible("fsl,imx6q") ||
 	    of_machine_is_compatible("fsl,imx6qp")) {
 		if (val != OCOTP_CFG3_SPEED_852MHZ)
-			if (dev_pm_opp_disable(dev, 852000000))
-				dev_warn(dev, "failed to disable 852MHz OPP\n");
+			imx6x_disable_freq_in_opp(dev, 852000000);
+
 		if (val != OCOTP_CFG3_SPEED_1P2GHZ)
-			if (dev_pm_opp_disable(dev, 1200000000))
-				dev_warn(dev, "failed to disable 1.2GHz OPP\n");
+			imx6x_disable_freq_in_opp(dev, 1200000000);
 	}
 	iounmap(base);
 put_node:
@@ -338,20 +344,16 @@ static int imx6ul_opp_check_speed_grading(struct device *dev)
 	val >>= OCOTP_CFG3_SPEED_SHIFT;
 	val &= 0x3;
 
-	if (of_machine_is_compatible("fsl,imx6ul")) {
+	if (of_machine_is_compatible("fsl,imx6ul"))
 		if (val != OCOTP_CFG3_6UL_SPEED_696MHZ)
-			if (dev_pm_opp_disable(dev, 696000000))
-				dev_warn(dev, "failed to disable 696MHz OPP\n");
-	}
+			imx6x_disable_freq_in_opp(dev, 696000000);
 
 	if (of_machine_is_compatible("fsl,imx6ull")) {
-		if (val != OCOTP_CFG3_6ULL_SPEED_792MHZ)
-			if (dev_pm_opp_disable(dev, 792000000))
-				dev_warn(dev, "failed to disable 792MHz OPP\n");
+		if (val < OCOTP_CFG3_6ULL_SPEED_792MHZ)
+			imx6x_disable_freq_in_opp(dev, 792000000);
 
 		if (val != OCOTP_CFG3_6ULL_SPEED_900MHZ)
-			if (dev_pm_opp_disable(dev, 900000000))
-				dev_warn(dev, "failed to disable 900MHz OPP\n");
+			imx6x_disable_freq_in_opp(dev, 900000000);
 	}
 
 	return ret;
@@ -732,14 +732,11 @@ static void create_units(struct fw_device *device)
 					fw_unit_attributes,
 					&unit->attribute_group);
 
-		if (device_register(&unit->device) < 0)
-			goto skip_unit;
-
 		fw_device_get(device);
-		continue;
-
-	skip_unit:
-		kfree(unit);
+		if (device_register(&unit->device) < 0) {
+			put_device(&unit->device);
+			continue;
+		}
 	}
 }
@@ -495,14 +495,17 @@ static ssize_t export_store(struct class *class,
 	}
 
 	status = gpiod_set_transitory(desc, false);
-	if (!status) {
-		status = gpiod_export(desc, true);
-		if (status < 0)
-			gpiod_free(desc);
-		else
-			set_bit(FLAG_SYSFS, &desc->flags);
+	if (status) {
+		gpiod_free(desc);
+		goto done;
 	}
 
+	status = gpiod_export(desc, true);
+	if (status < 0)
+		gpiod_free(desc);
+	else
+		set_bit(FLAG_SYSFS, &desc->flags);
+
 done:
 	if (status)
 		pr_debug("%s: status %d\n", __func__, status);
@@ -147,7 +147,7 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
 	}
 
 	for (i = 0; i < p->nchunks; i++) {
-		struct drm_amdgpu_cs_chunk __user **chunk_ptr = NULL;
+		struct drm_amdgpu_cs_chunk __user *chunk_ptr = NULL;
 		struct drm_amdgpu_cs_chunk user_chunk;
 		uint32_t __user *cdata;
 
@@ -1261,13 +1261,13 @@ static const struct panel_desc innolux_g070y2_l01 = {
 static const struct display_timing innolux_g101ice_l01_timing = {
 	.pixelclock = { 60400000, 71100000, 74700000 },
 	.hactive = { 1280, 1280, 1280 },
-	.hfront_porch = { 41, 80, 100 },
-	.hback_porch = { 40, 79, 99 },
-	.hsync_len = { 1, 1, 1 },
+	.hfront_porch = { 30, 60, 70 },
+	.hback_porch = { 30, 60, 70 },
+	.hsync_len = { 22, 40, 60 },
 	.vactive = { 800, 800, 800 },
-	.vfront_porch = { 5, 11, 14 },
-	.vback_porch = { 4, 11, 14 },
-	.vsync_len = { 1, 1, 1 },
+	.vfront_porch = { 3, 8, 14 },
+	.vback_porch = { 3, 8, 14 },
+	.vsync_len = { 4, 7, 12 },
 	.flags = DISPLAY_FLAGS_DE_HIGH,
 };
 
@@ -204,14 +204,22 @@ static inline void vop_cfg_done(struct vop *vop)
 	VOP_REG_SET(vop, common, cfg_done, 1);
 }
 
-static bool has_rb_swapped(uint32_t format)
+static bool has_rb_swapped(uint32_t version, uint32_t format)
 {
 	switch (format) {
 	case DRM_FORMAT_XBGR8888:
 	case DRM_FORMAT_ABGR8888:
-	case DRM_FORMAT_BGR888:
 	case DRM_FORMAT_BGR565:
 		return true;
+	/*
+	 * full framework (IP version 3.x) only need rb swapped for RGB888 and
+	 * little framework (IP version 2.x) only need rb swapped for BGR888,
+	 * check for 3.x to also only rb swap BGR888 for unknown vop version
+	 */
+	case DRM_FORMAT_RGB888:
+		return VOP_MAJOR(version) == 3;
+	case DRM_FORMAT_BGR888:
+		return VOP_MAJOR(version) != 3;
 	default:
 		return false;
 	}
@@ -798,7 +806,7 @@ static void vop_plane_atomic_update(struct drm_plane *plane,
 	VOP_WIN_SET(vop, win, dsp_info, dsp_info);
 	VOP_WIN_SET(vop, win, dsp_st, dsp_st);
 
-	rb_swap = has_rb_swapped(fb->format->format);
+	rb_swap = has_rb_swapped(vop->data->version, fb->format->format);
 	VOP_WIN_SET(vop, win, rb_swap, rb_swap);
 
 	/*
@@ -252,7 +252,7 @@ static int asus_raw_event(struct hid_device *hdev,
 	return 0;
 }
 
-static int asus_kbd_set_report(struct hid_device *hdev, u8 *buf, size_t buf_size)
+static int asus_kbd_set_report(struct hid_device *hdev, const u8 *buf, size_t buf_size)
 {
 	unsigned char *dmabuf;
 	int ret;
@@ -271,7 +271,7 @@ static int asus_kbd_set_report(struct hid_device *hdev, u8 *buf, size_t buf_size
 
 static int asus_kbd_init(struct hid_device *hdev)
 {
-	u8 buf[] = { FEATURE_KBD_REPORT_ID, 0x41, 0x53, 0x55, 0x53, 0x20, 0x54,
+	const u8 buf[] = { FEATURE_KBD_REPORT_ID, 0x41, 0x53, 0x55, 0x53, 0x20, 0x54,
 		     0x65, 0x63, 0x68, 0x2e, 0x49, 0x6e, 0x63, 0x2e, 0x00 };
 	int ret;
 
@@ -285,7 +285,7 @@ static int asus_kbd_init(struct hid_device *hdev)
 static int asus_kbd_get_functions(struct hid_device *hdev,
 				  unsigned char *kbd_func)
 {
-	u8 buf[] = { FEATURE_KBD_REPORT_ID, 0x05, 0x20, 0x31, 0x00, 0x08 };
+	const u8 buf[] = { FEATURE_KBD_REPORT_ID, 0x05, 0x20, 0x31, 0x00, 0x08 };
 	u8 *readbuf;
 	int ret;
 
@@ -614,6 +614,24 @@ static int asus_start_multitouch(struct hid_device *hdev)
 	return 0;
 }
 
+static int __maybe_unused asus_resume(struct hid_device *hdev) {
+	struct asus_drvdata *drvdata = hid_get_drvdata(hdev);
+	int ret = 0;
+
+	if (drvdata->kbd_backlight) {
+		const u8 buf[] = { FEATURE_KBD_REPORT_ID, 0xba, 0xc5, 0xc4,
+				   drvdata->kbd_backlight->cdev.brightness };
+		ret = asus_kbd_set_report(hdev, buf, sizeof(buf));
+		if (ret < 0) {
+			hid_err(hdev, "Asus failed to set keyboard backlight: %d\n", ret);
+			goto asus_resume_err;
+		}
+	}
+
+asus_resume_err:
+	return ret;
+}
+
 static int __maybe_unused asus_reset_resume(struct hid_device *hdev)
 {
 	struct asus_drvdata *drvdata = hid_get_drvdata(hdev);
@@ -831,6 +849,7 @@ static struct hid_driver asus_driver = {
 	.input_configured = asus_input_configured,
#ifdef CONFIG_PM
 	.reset_resume = asus_reset_resume,
+	.resume = asus_resume,
 #endif
 	.raw_event = asus_raw_event
 };
@@ -701,15 +701,22 @@ static void hid_close_report(struct hid_device *device)
 * Free a device structure, all reports, and all fields.
 */
 
-static void hid_device_release(struct device *dev)
+void hiddev_free(struct kref *ref)
 {
-	struct hid_device *hid = to_hid_device(dev);
+	struct hid_device *hid = container_of(ref, struct hid_device, ref);
 
 	hid_close_report(hid);
 	kfree(hid->dev_rdesc);
 	kfree(hid);
 }
 
+static void hid_device_release(struct device *dev)
+{
+	struct hid_device *hid = to_hid_device(dev);
+
+	kref_put(&hid->ref, hiddev_free);
+}
+
 /*
  * Fetch a report description item from the data stream. We support long
  * items, though they are not used yet.
@@ -2259,10 +2266,12 @@ int hid_add_device(struct hid_device *hdev)
 		hid_warn(hdev, "bad device descriptor (%d)\n", ret);
 	}
 
+	hdev->id = atomic_inc_return(&id);
+
 	/* XXX hack, any other cleaner solution after the driver core
 	 * is converted to allow more than 20 bytes as the device name? */
 	dev_set_name(&hdev->dev, "%04X:%04X:%04X.%04X", hdev->bus,
-		     hdev->vendor, hdev->product, atomic_inc_return(&id));
+		     hdev->vendor, hdev->product, hdev->id);
 
 	hid_debug_register(hdev, dev_name(&hdev->dev));
 	ret = device_add(&hdev->dev);
@@ -2305,6 +2314,7 @@ struct hid_device *hid_allocate_device(void)
 	spin_lock_init(&hdev->debug_list_lock);
 	sema_init(&hdev->driver_input_lock, 1);
 	mutex_init(&hdev->ll_open_lock);
+	kref_init(&hdev->ref);
 
 	return hdev;
 }
@@ -1096,6 +1096,7 @@ static int hid_debug_events_open(struct inode *inode, struct file *file)
 		goto out;
 	}
 	list->hdev = (struct hid_device *) inode->i_private;
+	kref_get(&list->hdev->ref);
 	file->private_data = list;
 	mutex_init(&list->read_mutex);
 
@@ -1188,6 +1189,8 @@ static int hid_debug_events_release(struct inode *inode, struct file *file)
 	list_del(&list->node);
 	spin_unlock_irqrestore(&list->hdev->debug_list_lock, flags);
 	kfifo_free(&list->hid_debug_fifo);
+
+	kref_put(&list->hdev->ref, hiddev_free);
 	kfree(list);
 
 	return 0;
@@ -1981,6 +1981,11 @@ static const struct hid_device_id mt_devices[] = {
 		MT_USB_DEVICE(USB_VENDOR_ID_HANVON_ALT,
 			USB_DEVICE_ID_HANVON_ALT_MULTITOUCH) },
 
+	/* HONOR GLO-GXXX panel */
+	{ .driver_data = MT_CLS_VTL,
+		HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
+			0x347d, 0x7853) },
+
 	/* Ilitek dual touch panel */
 	{ .driver_data = MT_CLS_NSMU,
 		MT_USB_DEVICE(USB_VENDOR_ID_ILITEK,
@@ -35,6 +35,7 @@ static const struct hid_device_id hid_quirks[] = {
 	{ HID_USB_DEVICE(USB_VENDOR_ID_AKAI, USB_DEVICE_ID_AKAI_MPKMINI2), HID_QUIRK_NO_INIT_REPORTS },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_ALPS, USB_DEVICE_ID_IBM_GAMEPAD), HID_QUIRK_BADPAD },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_AMI, USB_DEVICE_ID_AMI_VIRT_KEYBOARD_AND_MOUSE), HID_QUIRK_ALWAYS_POLL },
+	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_REVB_ANSI), HID_QUIRK_ALWAYS_POLL },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_2PORTKVM), HID_QUIRK_NOGET },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_4PORTKVMC), HID_QUIRK_NOGET },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_4PORTKVM), HID_QUIRK_NOGET },
@@ -45,6 +45,7 @@ ACPI_MODULE_NAME(ACPI_POWER_METER_NAME);
 #define POWER_METER_CAN_NOTIFY	(1 << 3)
 #define POWER_METER_IS_BATTERY	(1 << 8)
 #define UNKNOWN_HYSTERESIS	0xFFFFFFFF
+#define UNKNOWN_POWER		0xFFFFFFFF
 
 #define METER_NOTIFY_CONFIG	0x80
 #define METER_NOTIFY_TRIP	0x81
@@ -356,6 +357,9 @@ static ssize_t show_power(struct device *dev,
 	update_meter(resource);
 	mutex_unlock(&resource->lock);
 
+	if (resource->power == UNKNOWN_POWER)
+		return -ENODATA;
+
 	return sprintf(buf, "%llu\n", resource->power * 1000);
 }
 
@@ -16,8 +16,8 @@
 /* Conversion times in us */
 static const u16 ms_sensors_ht_t_conversion_time[] = { 50000, 25000,
 						       13000, 7000 };
-static const u16 ms_sensors_ht_h_conversion_time[] = { 16000, 3000,
-						       5000, 8000 };
+static const u16 ms_sensors_ht_h_conversion_time[] = { 16000, 5000,
+						       3000, 8000 };
 static const u16 ms_sensors_tp_conversion_time[] = { 500, 1100, 2100,
 						     4100, 8220, 16440 };
 
@@ -508,13 +508,13 @@ inv_mpu6050_read_raw(struct iio_dev *indio_dev,
 			ret = inv_mpu6050_sensor_show(st, st->reg->gyro_offset,
 						chan->channel2, val);
 			mutex_unlock(&st->lock);
-			return IIO_VAL_INT;
+			return ret;
 		case IIO_ACCEL:
 			mutex_lock(&st->lock);
 			ret = inv_mpu6050_sensor_show(st, st->reg->accl_offset,
 						chan->channel2, val);
 			mutex_unlock(&st->lock);
-			return IIO_VAL_INT;
+			return ret;
 
 		default:
 			return -EINVAL;
@@ -70,7 +70,7 @@ static char version[] =
 		BNXT_RE_DESC " v" ROCE_DRV_MODULE_VERSION "\n";
 
 MODULE_AUTHOR("Eddie Wai <eddie.wai@broadcom.com>");
-MODULE_DESCRIPTION(BNXT_RE_DESC " Driver");
+MODULE_DESCRIPTION(BNXT_RE_DESC);
 MODULE_LICENSE("Dual BSD/GPL");
 
 /* globals */
@@ -2945,6 +2945,9 @@ static enum i40iw_status_code i40iw_sc_alloc_stag(
 	u64 header;
 	enum i40iw_page_size page_size;
 
+	if (!info->total_len && !info->all_memory)
+		return -EINVAL;
+
 	page_size = (info->page_size == 0x200000) ? I40IW_PAGE_SIZE_2M : I40IW_PAGE_SIZE_4K;
 	cqp = dev->cqp;
 	wqe = i40iw_sc_cqp_get_next_send_wqe(cqp, scratch);
@@ -3003,6 +3006,9 @@ static enum i40iw_status_code i40iw_sc_mr_reg_non_shared(
 	u8 addr_type;
 	enum i40iw_page_size page_size;
 
+	if (!info->total_len && !info->all_memory)
+		return -EINVAL;
+
 	page_size = (info->page_size == 0x200000) ? I40IW_PAGE_SIZE_2M : I40IW_PAGE_SIZE_4K;
 	if (info->access_rights & (I40IW_ACCESS_FLAGS_REMOTEREAD_ONLY |
 				   I40IW_ACCESS_FLAGS_REMOTEWRITE_ONLY))
@@ -779,6 +779,7 @@ struct i40iw_allocate_stag_info {
 	bool use_hmc_fcn_index;
 	u8 hmc_fcn_index;
 	bool use_pf_rid;
+	bool all_memory;
 };
 
 struct i40iw_reg_ns_stag_info {
@@ -797,6 +798,7 @@ struct i40iw_reg_ns_stag_info {
 	bool use_hmc_fcn_index;
 	u8 hmc_fcn_index;
 	bool use_pf_rid;
+	bool all_memory;
 };
 
 struct i40iw_fast_reg_stag_info {
@@ -1581,7 +1581,8 @@ static int i40iw_handle_q_mem(struct i40iw_device *iwdev,
 static int i40iw_hw_alloc_stag(struct i40iw_device *iwdev, struct i40iw_mr *iwmr)
 {
 	struct i40iw_allocate_stag_info *info;
-	struct i40iw_pd *iwpd = to_iwpd(iwmr->ibmr.pd);
+	struct ib_pd *pd = iwmr->ibmr.pd;
+	struct i40iw_pd *iwpd = to_iwpd(pd);
 	enum i40iw_status_code status;
 	int err = 0;
 	struct i40iw_cqp_request *cqp_request;
@@ -1598,6 +1599,7 @@ static int i40iw_hw_alloc_stag(struct i40iw_device *iwdev, struct i40iw_mr *iwmr
 	info->stag_idx = iwmr->stag >> I40IW_CQPSQ_STAG_IDX_SHIFT;
 	info->pd_id = iwpd->sc_pd.pd_id;
 	info->total_len = iwmr->length;
+	info->all_memory = pd->flags & IB_PD_UNSAFE_GLOBAL_RKEY;
 	info->remote_access = true;
 	cqp_info->cqp_cmd = OP_ALLOC_STAG;
 	cqp_info->post_sq = 1;
@@ -1651,6 +1653,8 @@ static struct ib_mr *i40iw_alloc_mr(struct ib_pd *pd,
 	iwmr->type = IW_MEMREG_TYPE_MEM;
 	palloc = &iwpbl->pble_alloc;
 	iwmr->page_cnt = max_num_sg;
+	/* Use system PAGE_SIZE as the sg page sizes are unknown at this point */
+	iwmr->length = max_num_sg * PAGE_SIZE;
 	mutex_lock(&iwdev->pbl_mutex);
 	status = i40iw_get_pble(&iwdev->sc_dev, iwdev->pble_rsrc, palloc, iwmr->page_cnt);
 	mutex_unlock(&iwdev->pbl_mutex);
@@ -1747,7 +1751,8 @@ static int i40iw_hwreg_mr(struct i40iw_device *iwdev,
 {
 	struct i40iw_pbl *iwpbl = &iwmr->iwpbl;
 	struct i40iw_reg_ns_stag_info *stag_info;
-	struct i40iw_pd *iwpd = to_iwpd(iwmr->ibmr.pd);
+	struct ib_pd *pd = iwmr->ibmr.pd;
+	struct i40iw_pd *iwpd = to_iwpd(pd);
 	struct i40iw_pble_alloc *palloc = &iwpbl->pble_alloc;
 	enum i40iw_status_code status;
 	int err = 0;
@@ -1767,6 +1772,7 @@ static int i40iw_hwreg_mr(struct i40iw_device *iwdev,
 	stag_info->total_len = iwmr->length;
 	stag_info->access_rights = access;
 	stag_info->pd_id = iwpd->sc_pd.pd_id;
+	stag_info->all_memory = pd->flags & IB_PD_UNSAFE_GLOBAL_RKEY;
 	stag_info->addr_type = I40IW_ADDR_TYPE_VA_BASED;
 	stag_info->page_size = iwmr->page_size;
 
@@ -190,15 +190,15 @@ isert_alloc_rx_descriptors(struct isert_conn *isert_conn)
 	rx_desc = isert_conn->rx_descs;
 
 	for (i = 0; i < ISERT_QP_MAX_RECV_DTOS; i++, rx_desc++) {
-		dma_addr = ib_dma_map_single(ib_dev, (void *)rx_desc,
-				ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
+		dma_addr = ib_dma_map_single(ib_dev, rx_desc->buf,
+				ISER_RX_SIZE, DMA_FROM_DEVICE);
 		if (ib_dma_mapping_error(ib_dev, dma_addr))
 			goto dma_map_fail;
 
 		rx_desc->dma_addr = dma_addr;
 
 		rx_sg = &rx_desc->rx_sg;
-		rx_sg->addr = rx_desc->dma_addr;
+		rx_sg->addr = rx_desc->dma_addr + isert_get_hdr_offset(rx_desc);
 		rx_sg->length = ISER_RX_PAYLOAD_SIZE;
 		rx_sg->lkey = device->pd->local_dma_lkey;
 		rx_desc->rx_cqe.done = isert_recv_done;
@@ -210,7 +210,7 @@ dma_map_fail:
 	rx_desc = isert_conn->rx_descs;
 	for (j = 0; j < i; j++, rx_desc++) {
 		ib_dma_unmap_single(ib_dev, rx_desc->dma_addr,
-				    ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
+				    ISER_RX_SIZE, DMA_FROM_DEVICE);
 	}
 	kfree(isert_conn->rx_descs);
 	isert_conn->rx_descs = NULL;
@@ -231,7 +231,7 @@ isert_free_rx_descriptors(struct isert_conn *isert_conn)
 	rx_desc = isert_conn->rx_descs;
 	for (i = 0; i < ISERT_QP_MAX_RECV_DTOS; i++, rx_desc++) {
 		ib_dma_unmap_single(ib_dev, rx_desc->dma_addr,
-				    ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
+				    ISER_RX_SIZE, DMA_FROM_DEVICE);
 	}
 
 	kfree(isert_conn->rx_descs);
@@ -416,10 +416,9 @@ isert_free_login_buf(struct isert_conn *isert_conn)
 			    ISER_RX_PAYLOAD_SIZE, DMA_TO_DEVICE);
 	kfree(isert_conn->login_rsp_buf);
 
-	ib_dma_unmap_single(ib_dev, isert_conn->login_req_dma,
-			    ISER_RX_PAYLOAD_SIZE,
-			    DMA_FROM_DEVICE);
-	kfree(isert_conn->login_req_buf);
+	ib_dma_unmap_single(ib_dev, isert_conn->login_desc->dma_addr,
+			    ISER_RX_SIZE, DMA_FROM_DEVICE);
+	kfree(isert_conn->login_desc);
 }
 
 static int
@@ -428,25 +427,25 @@ isert_alloc_login_buf(struct isert_conn *isert_conn,
 {
 	int ret;
 
-	isert_conn->login_req_buf = kzalloc(sizeof(*isert_conn->login_req_buf),
+	isert_conn->login_desc = kzalloc(sizeof(*isert_conn->login_desc),
 			GFP_KERNEL);
-	if (!isert_conn->login_req_buf)
+	if (!isert_conn->login_desc)
 		return -ENOMEM;
 
-	isert_conn->login_req_dma = ib_dma_map_single(ib_dev,
-				isert_conn->login_req_buf,
-				ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
-	ret = ib_dma_mapping_error(ib_dev, isert_conn->login_req_dma);
+	isert_conn->login_desc->dma_addr = ib_dma_map_single(ib_dev,
+				isert_conn->login_desc->buf,
+				ISER_RX_SIZE, DMA_FROM_DEVICE);
+	ret = ib_dma_mapping_error(ib_dev, isert_conn->login_desc->dma_addr);
 	if (ret) {
-		isert_err("login_req_dma mapping error: %d\n", ret);
-		isert_conn->login_req_dma = 0;
-		goto out_free_login_req_buf;
+		isert_err("login_desc dma mapping error: %d\n", ret);
+		isert_conn->login_desc->dma_addr = 0;
+		goto out_free_login_desc;
 	}
 
 	isert_conn->login_rsp_buf = kzalloc(ISER_RX_PAYLOAD_SIZE, GFP_KERNEL);
 	if (!isert_conn->login_rsp_buf) {
 		ret = -ENOMEM;
-		goto out_unmap_login_req_buf;
+		goto out_unmap_login_desc;
 	}
 
 	isert_conn->login_rsp_dma = ib_dma_map_single(ib_dev,
@@ -463,11 +462,11 @@ isert_alloc_login_buf(struct isert_conn *isert_conn,
 
 out_free_login_rsp_buf:
 	kfree(isert_conn->login_rsp_buf);
-out_unmap_login_req_buf:
-	ib_dma_unmap_single(ib_dev, isert_conn->login_req_dma,
-			    ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
-out_free_login_req_buf:
-	kfree(isert_conn->login_req_buf);
+out_unmap_login_desc:
+	ib_dma_unmap_single(ib_dev, isert_conn->login_desc->dma_addr,
+			    ISER_RX_SIZE, DMA_FROM_DEVICE);
+out_free_login_desc:
+	kfree(isert_conn->login_desc);
 	return ret;
 }
 
@@ -586,7 +585,7 @@ isert_connect_release(struct isert_conn *isert_conn)
 		ib_destroy_qp(isert_conn->qp);
 	}
 
-	if (isert_conn->login_req_buf)
+	if (isert_conn->login_desc)
 		isert_free_login_buf(isert_conn);
 
 	isert_device_put(device);
@@ -976,17 +975,18 @@ isert_login_post_recv(struct isert_conn *isert_conn)
 	int ret;
 
 	memset(&sge, 0, sizeof(struct ib_sge));
-	sge.addr = isert_conn->login_req_dma;
+	sge.addr = isert_conn->login_desc->dma_addr +
+		isert_get_hdr_offset(isert_conn->login_desc);
 	sge.length = ISER_RX_PAYLOAD_SIZE;
 	sge.lkey = isert_conn->device->pd->local_dma_lkey;
 
 	isert_dbg("Setup sge: addr: %llx length: %d 0x%08x\n",
 		sge.addr, sge.length, sge.lkey);
 
-	isert_conn->login_req_buf->rx_cqe.done = isert_login_recv_done;
+	isert_conn->login_desc->rx_cqe.done = isert_login_recv_done;
 
 	memset(&rx_wr, 0, sizeof(struct ib_recv_wr));
-	rx_wr.wr_cqe = &isert_conn->login_req_buf->rx_cqe;
+	rx_wr.wr_cqe = &isert_conn->login_desc->rx_cqe;
 	rx_wr.sg_list = &sge;
 	rx_wr.num_sge = 1;
 
@@ -1063,7 +1063,7 @@ post_send:
 static void
 isert_rx_login_req(struct isert_conn *isert_conn)
 {
-	struct iser_rx_desc *rx_desc = isert_conn->login_req_buf;
+	struct iser_rx_desc *rx_desc = isert_conn->login_desc;
 	int rx_buflen = isert_conn->login_req_len;
 	struct iscsi_conn *conn = isert_conn->conn;
 	struct iscsi_login *login = conn->conn_login;
@@ -1075,7 +1075,7 @@ isert_rx_login_req(struct isert_conn *isert_conn)
 
 	if (login->first_request) {
 		struct iscsi_login_req *login_req =
-			(struct iscsi_login_req *)&rx_desc->iscsi_header;
+			(struct iscsi_login_req *)isert_get_iscsi_hdr(rx_desc);
 		/*
 		 * Setup the initial iscsi_login values from the leading
 		 * login request PDU.
@@ -1094,13 +1094,13 @@ isert_rx_login_req(struct isert_conn *isert_conn)
 		login->tsih = be16_to_cpu(login_req->tsih);
 	}
 
-	memcpy(&login->req[0], (void *)&rx_desc->iscsi_header, ISCSI_HDR_LEN);
+	memcpy(&login->req[0], isert_get_iscsi_hdr(rx_desc), ISCSI_HDR_LEN);
 
 	size = min(rx_buflen, MAX_KEY_VALUE_PAIRS);
 	isert_dbg("Using login payload size: %d, rx_buflen: %d "
 		  "MAX_KEY_VALUE_PAIRS: %d\n", size, rx_buflen,
 		  MAX_KEY_VALUE_PAIRS);
-	memcpy(login->req_buf, &rx_desc->data[0], size);
+	memcpy(login->req_buf, isert_get_data(rx_desc), size);
 
 	if (login->first_request) {
 		complete(&isert_conn->login_comp);
@@ -1165,14 +1165,15 @@ isert_handle_scsi_cmd(struct isert_conn *isert_conn,
 	if (imm_data_len != data_len) {
 		sg_nents = max(1UL, DIV_ROUND_UP(imm_data_len, PAGE_SIZE));
 		sg_copy_from_buffer(cmd->se_cmd.t_data_sg, sg_nents,
-				    &rx_desc->data[0], imm_data_len);
+				    isert_get_data(rx_desc), imm_data_len);
 		isert_dbg("Copy Immediate sg_nents: %u imm_data_len: %d\n",
 			  sg_nents, imm_data_len);
 	} else {
 		sg_init_table(&isert_cmd->sg, 1);
 		cmd->se_cmd.t_data_sg = &isert_cmd->sg;
 		cmd->se_cmd.t_data_nents = 1;
-		sg_set_buf(&isert_cmd->sg, &rx_desc->data[0], imm_data_len);
+		sg_set_buf(&isert_cmd->sg, isert_get_data(rx_desc),
+			   imm_data_len);
 		isert_dbg("Transfer Immediate imm_data_len: %d\n",
 			  imm_data_len);
 	}
@@ -1241,9 +1242,9 @@ isert_handle_iscsi_dataout(struct isert_conn *isert_conn,
 	}
 	isert_dbg("Copying DataOut: sg_start: %p, sg_off: %u "
 		  "sg_nents: %u from %p %u\n", sg_start, sg_off,
-		  sg_nents, &rx_desc->data[0], unsol_data_len);
+		  sg_nents, isert_get_data(rx_desc), unsol_data_len);
 
-	sg_copy_from_buffer(sg_start, sg_nents, &rx_desc->data[0],
+	sg_copy_from_buffer(sg_start, sg_nents, isert_get_data(rx_desc),
 			    unsol_data_len);
 
 	rc = iscsit_check_dataout_payload(cmd, hdr, false);
@@ -1302,7 +1303,7 @@ isert_handle_text_cmd(struct isert_conn *isert_conn, struct isert_cmd *isert_cmd
 	}
 	cmd->text_in_ptr = text_in;
 
-	memcpy(cmd->text_in_ptr, &rx_desc->data[0], payload_length);
+	memcpy(cmd->text_in_ptr, isert_get_data(rx_desc), payload_length);
 
 	return iscsit_process_text_cmd(conn, cmd, hdr);
 }
@@ -1312,7 +1313,7 @@ isert_rx_opcode(struct isert_conn *isert_conn, struct iser_rx_desc *rx_desc,
 		uint32_t read_stag, uint64_t read_va,
 		uint32_t write_stag, uint64_t write_va)
 {
-	struct iscsi_hdr *hdr = &rx_desc->iscsi_header;
+	struct iscsi_hdr *hdr = isert_get_iscsi_hdr(rx_desc);
 	struct iscsi_conn *conn = isert_conn->conn;
 	struct iscsi_cmd *cmd;
 	struct isert_cmd *isert_cmd;
@@ -1410,8 +1411,8 @@ isert_recv_done(struct ib_cq *cq, struct ib_wc *wc)
 	struct isert_conn *isert_conn = wc->qp->qp_context;
 	struct ib_device *ib_dev = isert_conn->cm_id->device;
 	struct iser_rx_desc *rx_desc = cqe_to_rx_desc(wc->wr_cqe);
-	struct iscsi_hdr *hdr = &rx_desc->iscsi_header;
-	struct iser_ctrl *iser_ctrl = &rx_desc->iser_header;
+	struct iscsi_hdr *hdr = isert_get_iscsi_hdr(rx_desc);
+	struct iser_ctrl *iser_ctrl = isert_get_iser_hdr(rx_desc);
 	uint64_t read_va = 0, write_va = 0;
 	uint32_t read_stag = 0, write_stag = 0;
 
@@ -1425,7 +1426,7 @@ isert_recv_done(struct ib_cq *cq, struct ib_wc *wc)
 	rx_desc->in_use = true;
 
 	ib_dma_sync_single_for_cpu(ib_dev, rx_desc->dma_addr,
-			ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
+			ISER_RX_SIZE, DMA_FROM_DEVICE);
 
 	isert_dbg("DMA: 0x%llx, iSCSI opcode: 0x%02x, ITT: 0x%08x, flags: 0x%02x dlen: %d\n",
 		 rx_desc->dma_addr, hdr->opcode, hdr->itt, hdr->flags,
@@ -1460,7 +1461,7 @@ isert_recv_done(struct ib_cq *cq, struct ib_wc *wc)
 		read_stag, read_va, write_stag, write_va);
 
 	ib_dma_sync_single_for_device(ib_dev, rx_desc->dma_addr,
-			ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
+			ISER_RX_SIZE, DMA_FROM_DEVICE);
 }
 
 static void
@@ -1474,8 +1475,8 @@ isert_login_recv_done(struct ib_cq *cq, struct ib_wc *wc)
 		return;
 	}
 
-	ib_dma_sync_single_for_cpu(ib_dev, isert_conn->login_req_dma,
-			ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
+	ib_dma_sync_single_for_cpu(ib_dev, isert_conn->login_desc->dma_addr,
+			ISER_RX_SIZE, DMA_FROM_DEVICE);
 
 	isert_conn->login_req_len = wc->byte_len - ISER_HEADERS_LEN;
 
@@ -1490,8 +1491,8 @@ isert_login_recv_done(struct ib_cq *cq, struct ib_wc *wc)
 	complete(&isert_conn->login_req_comp);
 	mutex_unlock(&isert_conn->mutex);
 
-	ib_dma_sync_single_for_device(ib_dev, isert_conn->login_req_dma,
-			ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
+	ib_dma_sync_single_for_device(ib_dev, isert_conn->login_desc->dma_addr,
+			ISER_RX_SIZE, DMA_FROM_DEVICE);
 }
 
 static void
@@ -59,9 +59,11 @@
 				ISERT_MAX_TX_MISC_PDUS + \
 				ISERT_MAX_RX_MISC_PDUS)
 
-#define ISER_RX_PAD_SIZE (ISCSI_DEF_MAX_RECV_SEG_LEN + 4096 - \
-		(ISER_RX_PAYLOAD_SIZE + sizeof(u64) + sizeof(struct ib_sge) + \
-		 sizeof(struct ib_cqe) + sizeof(bool)))
+/*
+ * RX size is default of 8k plus headers, but data needs to align to
+ * 512 boundary, so use 1024 to have the extra space for alignment.
+ */
+#define ISER_RX_SIZE (ISCSI_DEF_MAX_RECV_SEG_LEN + 1024)
 
 #define ISCSI_ISER_SG_TABLESIZE 256
 
@@ -80,21 +82,41 @@ enum iser_conn_state {
 };
 
 struct iser_rx_desc {
-	struct iser_ctrl iser_header;
-	struct iscsi_hdr iscsi_header;
-	char data[ISCSI_DEF_MAX_RECV_SEG_LEN];
+	char buf[ISER_RX_SIZE];
 	u64 dma_addr;
 	struct ib_sge rx_sg;
 	struct ib_cqe rx_cqe;
 	bool in_use;
-	char pad[ISER_RX_PAD_SIZE];
-} __packed;
+};
 
 static inline struct iser_rx_desc *cqe_to_rx_desc(struct ib_cqe *cqe)
 {
	return container_of(cqe, struct iser_rx_desc, rx_cqe);
 }
 
+static void *isert_get_iser_hdr(struct iser_rx_desc *desc)
+{
+	return PTR_ALIGN(desc->buf + ISER_HEADERS_LEN, 512) - ISER_HEADERS_LEN;
+}
+
+static size_t isert_get_hdr_offset(struct iser_rx_desc *desc)
+{
+	return isert_get_iser_hdr(desc) - (void *)desc->buf;
+}
+
+static void *isert_get_iscsi_hdr(struct iser_rx_desc *desc)
+{
+	return isert_get_iser_hdr(desc) + sizeof(struct iser_ctrl);
+}
+
+static void *isert_get_data(struct iser_rx_desc *desc)
+{
+	void *data = isert_get_iser_hdr(desc) + ISER_HEADERS_LEN;
+
+	WARN_ON((uintptr_t)data & 511);
+	return data;
+}
+
 struct iser_tx_desc {
 	struct iser_ctrl iser_header;
 	struct iscsi_hdr iscsi_header;
@@ -141,9 +163,8 @@ struct isert_conn {
 	u32 responder_resources;
 	u32 initiator_depth;
 	bool pi_support;
-	struct iser_rx_desc *login_req_buf;
+	struct iser_rx_desc *login_desc;
 	char *login_rsp_buf;
-	u64 login_req_dma;
 	int login_req_len;
 	u64 login_rsp_dma;
 	struct iser_rx_desc *rx_descs;
@@ -133,6 +133,7 @@ static const struct xpad_device {
 	{ 0x044f, 0x0f07, "Thrustmaster, Inc. Controller", 0, XTYPE_XBOX },
 	{ 0x044f, 0x0f10, "Thrustmaster Modena GT Wheel", 0, XTYPE_XBOX },
 	{ 0x044f, 0xb326, "Thrustmaster Gamepad GP XID", 0, XTYPE_XBOX360 },
+	{ 0x03f0, 0x0495, "HyperX Clutch Gladiate", 0, XTYPE_XBOXONE },
 	{ 0x045e, 0x0202, "Microsoft X-Box pad v1 (US)", 0, XTYPE_XBOX },
 	{ 0x045e, 0x0285, "Microsoft X-Box pad (Japan)", 0, XTYPE_XBOX },
 	{ 0x045e, 0x0287, "Microsoft Xbox Controller S", 0, XTYPE_XBOX },
@@ -445,6 +446,7 @@ static const struct usb_device_id xpad_table[] = {
 	XPAD_XBOX360_VENDOR(0x0079),		/* GPD Win 2 Controller */
 	XPAD_XBOX360_VENDOR(0x03eb),		/* Wooting Keyboards (Legacy) */
 	XPAD_XBOX360_VENDOR(0x044f),		/* Thrustmaster X-Box 360 controllers */
+	XPAD_XBOXONE_VENDOR(0x03f0),		/* HP HyperX Xbox One Controllers */
 	XPAD_XBOX360_VENDOR(0x045e),		/* Microsoft X-Box 360 controllers */
 	XPAD_XBOXONE_VENDOR(0x045e),		/* Microsoft X-Box One controllers */
 	XPAD_XBOX360_VENDOR(0x046d),		/* Logitech X-Box 360 style controllers */
@@ -108,6 +108,9 @@ static int micro_key_probe(struct platform_device *pdev)
 	keys->codes = devm_kmemdup(&pdev->dev, micro_keycodes,
			   keys->input->keycodesize * keys->input->keycodemax,
			   GFP_KERNEL);
+	if (!keys->codes)
+		return -ENOMEM;
+
 	keys->input->keycode = keys->codes;
 
 	__set_bit(EV_KEY, keys->input->evbit);
@@ -265,6 +265,7 @@ struct bcache_device {
 #define BCACHE_DEV_WB_RUNNING 3
 #define BCACHE_DEV_RATE_DW_RUNNING 4
 	int nr_stripes;
+#define BCH_MIN_STRIPE_SZ ((4 << 20) >> SECTOR_SHIFT)
 	unsigned int stripe_size;
 	atomic_t *stripe_sectors_dirty;
 	unsigned long *full_dirty_stripes;
@@ -1008,6 +1008,9 @@ err:
  *
  * The btree node will have either a read or a write lock held, depending on
  * level and op->lock.
+ *
+ * Note: Only error code or btree pointer will be returned, it is unncessary
+ * for callers to check NULL pointer.
  */
 struct btree *bch_btree_node_get(struct cache_set *c, struct btree_op *op,
 				 struct bkey *k, int level, bool write,
@@ -1120,6 +1123,10 @@ retry:
 		mutex_unlock(&b->c->bucket_lock);
 	}
 
+/*
+ * Only error code or btree pointer will be returned, it is unncessary for
+ * callers to check NULL pointer.
+ */
 struct btree *__bch_btree_node_alloc(struct cache_set *c, struct btree_op *op,
 				     int level, bool wait,
 				     struct btree *parent)
@@ -1379,7 +1386,7 @@ static int btree_gc_coalesce(struct btree *b, struct btree_op *op,
 	memset(new_nodes, 0, sizeof(new_nodes));
 	closure_init_stack(&cl);
 
-	while (nodes < GC_MERGE_NODES && !IS_ERR(r[nodes].b))
+	while (nodes < GC_MERGE_NODES && !IS_ERR_OR_NULL(r[nodes].b))
 		keys += r[nodes++].keys;
 
 	blocks = btree_default_blocks(b->c) * 2 / 3;
@@ -1526,7 +1533,7 @@ out_nocoalesce:
 	atomic_dec(&b->c->prio_blocked);
 
 	for (i = 0; i < nodes; i++)
-		if (!IS_ERR(new_nodes[i])) {
+		if (!IS_ERR_OR_NULL(new_nodes[i])) {
 			btree_node_free(new_nodes[i]);
 			rw_unlock(true, new_nodes[i]);
 		}
@@ -1543,6 +1550,8 @@ static int btree_gc_rewrite_node(struct btree *b, struct btree_op *op,
 		return 0;
 
 	n = btree_node_alloc_replacement(replace, NULL);
+	if (IS_ERR(n))
+		return 0;
 
 	/* recheck reserve after allocating replacement node */
 	if (btree_check_reserve(b, NULL)) {
@@ -807,6 +807,8 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
 
 	if (!d->stripe_size)
 		d->stripe_size = 1 << 31;
+	else if (d->stripe_size < BCH_MIN_STRIPE_SZ)
+		d->stripe_size = roundup(BCH_MIN_STRIPE_SZ, d->stripe_size);
 
 	d->nr_stripes = DIV_ROUND_UP_ULL(sectors, d->stripe_size);
 
@@ -1844,7 +1846,7 @@ static int run_cache_set(struct cache_set *c)
 		c->root = bch_btree_node_get(c, NULL, k,
 					     j->btree_level,
 					     true, NULL);
-		if (IS_ERR_OR_NULL(c->root))
+		if (IS_ERR(c->root))
 			goto err;
 
 		list_del_init(&c->root->list);
@@ -992,7 +992,7 @@ SHOW(__bch_cache)
 			sum += INITIAL_PRIO - cached[i];
 
 		if (n)
-			do_div(sum, n);
+			sum = div64_u64(sum, n);
 
 		for (i = 0; i < ARRAY_SIZE(q); i++)
 			q[i] = INITIAL_PRIO - cached[n * (i + 1) /
@@ -30,7 +30,7 @@ struct delay_c {
 	struct workqueue_struct *kdelayd_wq;
 	struct work_struct flush_expired_bios;
 	struct list_head delayed_bios;
-	atomic_t may_delay;
+	bool may_delay;

 	struct delay_class read;
 	struct delay_class write;
@@ -191,7 +191,7 @@ static int delay_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 	INIT_WORK(&dc->flush_expired_bios, flush_expired_bios);
 	INIT_LIST_HEAD(&dc->delayed_bios);
 	mutex_init(&dc->timer_lock);
-	atomic_set(&dc->may_delay, 1);
+	dc->may_delay = true;
 	dc->argc = argc;

 	ret = delay_class_ctr(ti, &dc->read, argv);
@@ -245,7 +245,7 @@ static int delay_bio(struct delay_c *dc, struct delay_class *c, struct bio *bio)
 	struct dm_delay_info *delayed;
 	unsigned long expires = 0;

-	if (!c->delay || !atomic_read(&dc->may_delay))
+	if (!c->delay)
 		return DM_MAPIO_REMAPPED;

 	delayed = dm_per_bio_data(bio, sizeof(struct dm_delay_info));
@@ -254,6 +254,10 @@ static int delay_bio(struct delay_c *dc, struct delay_class *c, struct bio *bio)
 	delayed->expires = expires = jiffies + msecs_to_jiffies(c->delay);

 	mutex_lock(&delayed_bios_lock);
+	if (unlikely(!dc->may_delay)) {
+		mutex_unlock(&delayed_bios_lock);
+		return DM_MAPIO_REMAPPED;
+	}
 	c->ops++;
 	list_add_tail(&delayed->list, &dc->delayed_bios);
 	mutex_unlock(&delayed_bios_lock);
@@ -267,7 +271,10 @@ static void delay_presuspend(struct dm_target *ti)
 {
 	struct delay_c *dc = ti->private;

-	atomic_set(&dc->may_delay, 0);
+	mutex_lock(&delayed_bios_lock);
+	dc->may_delay = false;
+	mutex_unlock(&delayed_bios_lock);

 	del_timer_sync(&dc->delay_timer);
 	flush_bios(flush_delayed_bios(dc, 1));
 }
@@ -276,7 +283,7 @@ static void delay_resume(struct dm_target *ti)
 {
 	struct delay_c *dc = ti->private;

-	atomic_set(&dc->may_delay, 1);
+	dc->may_delay = true;
 }

 static int delay_map(struct dm_target *ti, struct bio *bio)
@@ -1390,11 +1390,12 @@ static void integrity_metadata(struct work_struct *w)
 		}

 		__bio_for_each_segment(bv, bio, iter, dio->bio_details.bi_iter) {
+			struct bio_vec bv_copy = bv;
 			unsigned pos;
 			char *mem, *checksums_ptr;

again:
-			mem = (char *)kmap_atomic(bv.bv_page) + bv.bv_offset;
+			mem = (char *)kmap_atomic(bv_copy.bv_page) + bv_copy.bv_offset;
 			pos = 0;
 			checksums_ptr = checksums;
 			do {
@@ -1403,7 +1404,7 @@ again:
 				sectors_to_process -= ic->sectors_per_block;
 				pos += ic->sectors_per_block << SECTOR_SHIFT;
 				sector += ic->sectors_per_block;
-			} while (pos < bv.bv_len && sectors_to_process && checksums != checksums_onstack);
+			} while (pos < bv_copy.bv_len && sectors_to_process && checksums != checksums_onstack);
 			kunmap_atomic(mem);

 			r = dm_integrity_rw_tag(ic, checksums, &dio->metadata_block, &dio->metadata_offset,
@@ -1423,9 +1424,9 @@ again:
 			if (!sectors_to_process)
 				break;

-			if (unlikely(pos < bv.bv_len)) {
-				bv.bv_offset += pos;
-				bv.bv_len -= pos;
+			if (unlikely(pos < bv_copy.bv_len)) {
+				bv_copy.bv_offset += pos;
+				bv_copy.bv_len -= pos;
 				goto again;
 			}
 		}
@@ -29,7 +29,8 @@ bool verity_fec_is_enabled(struct dm_verity *v)
  */
 static inline struct dm_verity_fec_io *fec_io(struct dm_verity_io *io)
 {
-	return (struct dm_verity_fec_io *) verity_io_digest_end(io->v, io);
+	return (struct dm_verity_fec_io *)
+		((char *)io + io->v->ti->per_io_data_size - sizeof(struct dm_verity_fec_io));
 }

 /*
@@ -583,7 +583,9 @@ static void verity_end_io(struct bio *bio)
 	struct dm_verity_io *io = bio->bi_private;

 	if (bio->bi_status &&
-	    (!verity_fec_is_enabled(io->v) || verity_is_system_shutting_down())) {
+	    (!verity_fec_is_enabled(io->v) ||
+	     verity_is_system_shutting_down() ||
+	     (bio->bi_opf & REQ_RAHEAD))) {
 		verity_finish_io(io, bio->bi_status);
 		return;
 	}
@@ -109,12 +109,6 @@ static inline u8 *verity_io_want_digest(struct dm_verity *v,
 	return (u8 *)(io + 1) + v->ahash_reqsize + v->digest_size;
 }

-static inline u8 *verity_io_digest_end(struct dm_verity *v,
-				       struct dm_verity_io *io)
-{
-	return verity_io_want_digest(v, io) + v->digest_size;
-}
-
 extern int verity_for_bv_block(struct dm_verity *v, struct dm_verity_io *io,
 			       struct bvec_iter *iter,
 			       int (*process)(struct dm_verity *v,
@@ -1489,8 +1489,6 @@ probe_out:
 		/* Unregister video device */
 		video_unregister_device(&ch->video_dev);
 	}
-	kfree(vpif_obj.sd);
-	v4l2_device_unregister(&vpif_obj.v4l2_dev);

 	return err;
 }
@@ -1495,6 +1495,8 @@ static void mmc_blk_cqe_complete_rq(struct mmc_queue *mq, struct request *req)
 			blk_mq_requeue_request(req, true);
 		else
 			__blk_mq_end_request(req, BLK_STS_OK);
+	} else if (mq->in_recovery) {
+		blk_mq_requeue_request(req, true);
 	} else {
 		blk_mq_end_request(req, BLK_STS_OK);
 	}
@@ -1454,22 +1454,27 @@ int mmc_cqe_recovery(struct mmc_host *host)
 	host->cqe_ops->cqe_recovery_start(host);

 	memset(&cmd, 0, sizeof(cmd));
-	cmd.opcode = MMC_STOP_TRANSMISSION,
-	cmd.flags = MMC_RSP_R1B | MMC_CMD_AC,
+	cmd.opcode = MMC_STOP_TRANSMISSION;
+	cmd.flags = MMC_RSP_R1B | MMC_CMD_AC;
 	cmd.flags &= ~MMC_RSP_CRC; /* Ignore CRC */
-	cmd.busy_timeout = MMC_CQE_RECOVERY_TIMEOUT,
-	mmc_wait_for_cmd(host, &cmd, 0);
+	cmd.busy_timeout = MMC_CQE_RECOVERY_TIMEOUT;
+	mmc_wait_for_cmd(host, &cmd, MMC_CMD_RETRIES);
+
+	mmc_poll_for_busy(host->card, MMC_CQE_RECOVERY_TIMEOUT, true, true);

 	memset(&cmd, 0, sizeof(cmd));
 	cmd.opcode = MMC_CMDQ_TASK_MGMT;
 	cmd.arg = 1; /* Discard entire queue */
 	cmd.flags = MMC_RSP_R1B | MMC_CMD_AC;
 	cmd.flags &= ~MMC_RSP_CRC; /* Ignore CRC */
-	cmd.busy_timeout = MMC_CQE_RECOVERY_TIMEOUT,
-	err = mmc_wait_for_cmd(host, &cmd, 0);
+	cmd.busy_timeout = MMC_CQE_RECOVERY_TIMEOUT;
+	err = mmc_wait_for_cmd(host, &cmd, MMC_CMD_RETRIES);

 	host->cqe_ops->cqe_recovery_finish(host);

+	if (err)
+		err = mmc_wait_for_cmd(host, &cmd, MMC_CMD_RETRIES);
+
 	mmc_retune_release(host);

 	return err;
@@ -460,8 +460,8 @@ int mmc_switch_status(struct mmc_card *card)
 	return __mmc_switch_status(card, true);
 }

-static int mmc_poll_for_busy(struct mmc_card *card, unsigned int timeout_ms,
+int mmc_poll_for_busy(struct mmc_card *card, unsigned int timeout_ms,
 		      bool send_status, bool retry_crc_err)
 {
 	struct mmc_host *host = card->host;
 	int err;
@@ -514,6 +514,7 @@ static int mmc_poll_for_busy(struct mmc_card *card, unsigned int timeout_ms,

 	return 0;
 }
+EXPORT_SYMBOL_GPL(mmc_poll_for_busy);

 /**
  * __mmc_switch - modify EXT_CSD register
@@ -35,6 +35,8 @@ int mmc_can_ext_csd(struct mmc_card *card);
 int mmc_get_ext_csd(struct mmc_card *card, u8 **new_ext_csd);
 int mmc_switch_status(struct mmc_card *card);
 int __mmc_switch_status(struct mmc_card *card, bool crc_err_fatal);
+int mmc_poll_for_busy(struct mmc_card *card, unsigned int timeout_ms,
+		      bool send_status, bool retry_crc_err);
 int __mmc_switch(struct mmc_card *card, u8 set, u8 index, u8 value,
 		unsigned int timeout_ms, unsigned char timing,
 		bool use_busy_signal, bool send_status, bool retry_crc_err);
@@ -1059,8 +1059,8 @@ static bool cqhci_clear_all_tasks(struct mmc_host *mmc, unsigned int timeout)
 	ret = cqhci_tasks_cleared(cq_host);

 	if (!ret)
-		pr_debug("%s: cqhci: Failed to clear tasks\n",
+		pr_warn("%s: cqhci: Failed to clear tasks\n",
 			mmc_hostname(mmc));

 	return ret;
 }
@@ -1095,7 +1095,7 @@ static bool cqhci_halt(struct mmc_host *mmc, unsigned int timeout)
 	ret = cqhci_halted(cq_host);

 	if (!ret)
-		pr_err("%s: cqhci: Failed to halt\n", mmc_hostname(mmc));
+		pr_warn("%s: cqhci: Failed to halt\n", mmc_hostname(mmc));

 	mmc_log_string(mmc, "halt done with ret %d\n", ret);
 	return ret;
@@ -1104,8 +1104,8 @@ static bool cqhci_halt(struct mmc_host *mmc, unsigned int timeout)
 /*
  * After halting we expect to be able to use the command line. We interpret the
  * failure to halt to mean the data lines might still be in use (and the upper
- * layers will need to send a STOP command), so we set the timeout based on a
- * generous command timeout.
+ * layers will need to send a STOP command), however failing to halt complicates
+ * the recovery, so set a timeout that would reasonably allow I/O to complete.
  */
 #define CQHCI_START_HALT_TIMEOUT 5000

|
@ -1197,28 +1197,28 @@ static void cqhci_recovery_finish(struct mmc_host *mmc)
|
||||||
|
|
||||||
ok = cqhci_halt(mmc, CQHCI_FINISH_HALT_TIMEOUT);
|
ok = cqhci_halt(mmc, CQHCI_FINISH_HALT_TIMEOUT);
|
||||||
|
|
||||||
if (!cqhci_clear_all_tasks(mmc, CQHCI_CLEAR_TIMEOUT))
|
|
||||||
ok = false;
|
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* The specification contradicts itself, by saying that tasks cannot be
|
* The specification contradicts itself, by saying that tasks cannot be
|
||||||
* cleared if CQHCI does not halt, but if CQHCI does not halt, it should
|
* cleared if CQHCI does not halt, but if CQHCI does not halt, it should
|
||||||
* be disabled/re-enabled, but not to disable before clearing tasks.
|
* be disabled/re-enabled, but not to disable before clearing tasks.
|
||||||
* Have a go anyway.
|
* Have a go anyway.
|
||||||
*/
|
*/
|
||||||
if (!ok) {
|
if (!cqhci_clear_all_tasks(mmc, CQHCI_CLEAR_TIMEOUT))
|
||||||
pr_debug("%s: cqhci: disable / re-enable\n", mmc_hostname(mmc));
|
ok = false;
|
||||||
cqcfg = cqhci_readl(cq_host, CQHCI_CFG);
|
|
||||||
cqcfg &= ~CQHCI_ENABLE;
|
/* Disable to make sure tasks really are cleared */
|
||||||
cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
|
cqcfg = cqhci_readl(cq_host, CQHCI_CFG);
|
||||||
cqcfg |= CQHCI_ENABLE;
|
cqcfg &= ~CQHCI_ENABLE;
|
||||||
cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
|
cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
|
||||||
/* Be sure that there are no tasks */
|
|
||||||
ok = cqhci_halt(mmc, CQHCI_FINISH_HALT_TIMEOUT);
|
cqcfg = cqhci_readl(cq_host, CQHCI_CFG);
|
||||||
if (!cqhci_clear_all_tasks(mmc, CQHCI_CLEAR_TIMEOUT))
|
cqcfg |= CQHCI_ENABLE;
|
||||||
ok = false;
|
cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
|
||||||
WARN_ON(!ok);
|
|
||||||
}
|
cqhci_halt(mmc, CQHCI_FINISH_HALT_TIMEOUT);
|
||||||
|
|
||||||
|
if (!ok)
|
||||||
|
cqhci_clear_all_tasks(mmc, CQHCI_CLEAR_TIMEOUT);
|
||||||
|
|
||||||
cqhci_recover_mrqs(cq_host);
|
cqhci_recover_mrqs(cq_host);
|
||||||
|
|
||||||
|
|
|
@@ -420,8 +420,25 @@ read_pri_intelext(struct map_info *map, __u16 adr)
 		extra_size = 0;

 		/* Protection Register info */
-		extra_size += (extp->NumProtectionFields - 1) *
-			      sizeof(struct cfi_intelext_otpinfo);
+		if (extp->NumProtectionFields) {
+			struct cfi_intelext_otpinfo *otp =
+				(struct cfi_intelext_otpinfo *)&extp->extra[0];
+
+			extra_size += (extp->NumProtectionFields - 1) *
+				      sizeof(struct cfi_intelext_otpinfo);
+
+			if (extp_size >= sizeof(*extp) + extra_size) {
+				int i;
+
+				/* Do some byteswapping if necessary */
+				for (i = 0; i < extp->NumProtectionFields - 1; i++) {
+					otp->ProtRegAddr = le32_to_cpu(otp->ProtRegAddr);
+					otp->FactGroups = le16_to_cpu(otp->FactGroups);
+					otp->UserGroups = le16_to_cpu(otp->UserGroups);
+					otp++;
+				}
+			}
+		}
 	}

 	if (extp->MinorVersion >= '1') {
@@ -695,14 +712,16 @@ static int cfi_intelext_partition_fixup(struct mtd_info *mtd,
 	 */
 	if (extp && extp->MajorVersion == '1' && extp->MinorVersion >= '3'
 	    && extp->FeatureSupport & (1 << 9)) {
+		int offs = 0;
 		struct cfi_private *newcfi;
 		struct flchip *chip;
 		struct flchip_shared *shared;
-		int offs, numregions, numparts, partshift, numvirtchips, i, j;
+		int numregions, numparts, partshift, numvirtchips, i, j;

 		/* Protection Register info */
-		offs = (extp->NumProtectionFields - 1) *
-		       sizeof(struct cfi_intelext_otpinfo);
+		if (extp->NumProtectionFields)
+			offs = (extp->NumProtectionFields - 1) *
+			       sizeof(struct cfi_intelext_otpinfo);

 		/* Burst Read info */
 		offs += extp->extra[offs+1]+2;
@@ -1753,6 +1753,7 @@ static int brcmstb_nand_verify_erased_page(struct mtd_info *mtd,
 	int bitflips = 0;
 	int page = addr >> chip->page_shift;
 	int ret;
+	void *ecc_chunk;

 	if (!buf) {
 		buf = chip->data_buf;
@@ -1768,7 +1769,9 @@ static int brcmstb_nand_verify_erased_page(struct mtd_info *mtd,
 		return ret;

 	for (i = 0; i < chip->ecc.steps; i++, oob += sas) {
-		ret = nand_check_erased_ecc_chunk(buf, chip->ecc.size,
+		ecc_chunk = buf + chip->ecc.size * i;
+		ret = nand_check_erased_ecc_chunk(ecc_chunk,
+						  chip->ecc.size,
 						  oob, sas, NULL, 0,
 						  chip->ecc.strength);
 		if (ret < 0)
@@ -332,7 +332,7 @@ static int __init arc_rimi_init(void)
 	dev->irq = 9;

 	if (arcrimi_probe(dev)) {
-		free_netdev(dev);
+		free_arcdev(dev);
 		return -EIO;
 	}

@@ -349,7 +349,7 @@ static void __exit arc_rimi_exit(void)
 	iounmap(lp->mem_start);
 	release_mem_region(dev->mem_start, dev->mem_end - dev->mem_start + 1);
 	free_irq(dev->irq, dev);
-	free_netdev(dev);
+	free_arcdev(dev);
 }

 #ifndef MODULE
@@ -191,6 +191,8 @@ do { \
 #define ARC_IS_5MBIT 1 /* card default speed is 5MBit */
 #define ARC_CAN_10MBIT 2 /* card uses COM20022, supporting 10MBit,
 				but default is 2.5MBit. */
+#define ARC_HAS_LED 4 /* card has software controlled LEDs */
+#define ARC_HAS_ROTARY 8 /* card has rotary encoder */

 /* information needed to define an encapsulation driver */
 struct ArcProto {
@@ -303,6 +305,10 @@ struct arcnet_local {

 	int excnak_pending; /* We just got an excesive nak interrupt */

+	/* RESET flag handling */
+	int reset_in_progress;
+	struct work_struct reset_work;
+
 	struct {
 		uint16_t sequence; /* sequence number (incs with each packet) */
 		__be16 aborted_seq;
@@ -355,7 +361,9 @@ void arcnet_dump_skb(struct net_device *dev, struct sk_buff *skb, char *desc)

 void arcnet_unregister_proto(struct ArcProto *proto);
 irqreturn_t arcnet_interrupt(int irq, void *dev_id);

 struct net_device *alloc_arcdev(const char *name);
+void free_arcdev(struct net_device *dev);
+
 int arcnet_open(struct net_device *dev);
 int arcnet_close(struct net_device *dev);
@@ -387,10 +387,44 @@ static void arcnet_timer(struct timer_list *t)
 	struct arcnet_local *lp = from_timer(lp, t, timer);
 	struct net_device *dev = lp->dev;

-	if (!netif_carrier_ok(dev)) {
+	spin_lock_irq(&lp->lock);
+
+	if (!lp->reset_in_progress && !netif_carrier_ok(dev)) {
 		netif_carrier_on(dev);
 		netdev_info(dev, "link up\n");
 	}
+
+	spin_unlock_irq(&lp->lock);
+}
+
+static void reset_device_work(struct work_struct *work)
+{
+	struct arcnet_local *lp;
+	struct net_device *dev;
+
+	lp = container_of(work, struct arcnet_local, reset_work);
+	dev = lp->dev;
+
+	/* Do not bring the network interface back up if an ifdown
+	 * was already done.
+	 */
+	if (!netif_running(dev) || !lp->reset_in_progress)
+		return;
+
+	rtnl_lock();
+
+	/* Do another check, in case of an ifdown that was triggered in
+	 * the small race window between the exit condition above and
+	 * acquiring RTNL.
+	 */
+	if (!netif_running(dev) || !lp->reset_in_progress)
+		goto out;
+
+	dev_close(dev);
+	dev_open(dev);
+
+out:
+	rtnl_unlock();
 }

 static void arcnet_reply_tasklet(unsigned long data)
@@ -452,12 +486,25 @@ struct net_device *alloc_arcdev(const char *name)
 		lp->dev = dev;
 		spin_lock_init(&lp->lock);
 		timer_setup(&lp->timer, arcnet_timer, 0);
+		INIT_WORK(&lp->reset_work, reset_device_work);
 	}

 	return dev;
 }
 EXPORT_SYMBOL(alloc_arcdev);

+void free_arcdev(struct net_device *dev)
+{
+	struct arcnet_local *lp = netdev_priv(dev);
+
+	/* Do not cancel this at ->ndo_close(), as the workqueue itself
+	 * indirectly calls the ifdown path through dev_close().
+	 */
+	cancel_work_sync(&lp->reset_work);
+	free_netdev(dev);
+}
+EXPORT_SYMBOL(free_arcdev);
+
 /* Open/initialize the board. This is called sometime after booting when
  * the 'ifconfig' program is run.
  *
@@ -587,6 +634,10 @@ int arcnet_close(struct net_device *dev)

 	/* shut down the card */
 	lp->hw.close(dev);
+
+	/* reset counters */
+	lp->reset_in_progress = 0;
+
 	module_put(lp->hw.owner);
 	return 0;
 }
@@ -820,6 +871,9 @@ irqreturn_t arcnet_interrupt(int irq, void *dev_id)

 	spin_lock_irqsave(&lp->lock, flags);

+	if (lp->reset_in_progress)
+		goto out;
+
 	/* RESET flag was enabled - if device is not running, we must
 	 * clear it right away (but nothing else).
 	 */
@@ -852,11 +906,14 @@ irqreturn_t arcnet_interrupt(int irq, void *dev_id)
 		if (status & RESETflag) {
 			arc_printk(D_NORMAL, dev, "spurious reset (status=%Xh)\n",
 				   status);
-			arcnet_close(dev);
-			arcnet_open(dev);
+
+			lp->reset_in_progress = 1;
+			netif_stop_queue(dev);
+			netif_carrier_off(dev);
+			schedule_work(&lp->reset_work);

 			/* get out of the interrupt handler! */
-			break;
+			goto out;
 		}
 		/* RX is inhibited - we must have received something.
 		 * Prepare to receive into the next buffer.
|
||||||
udelay(1);
|
udelay(1);
|
||||||
lp->hw.intmask(dev, lp->intmask);
|
lp->hw.intmask(dev, lp->intmask);
|
||||||
|
|
||||||
|
out:
|
||||||
spin_unlock_irqrestore(&lp->lock, flags);
|
spin_unlock_irqrestore(&lp->lock, flags);
|
||||||
return retval;
|
return retval;
|
||||||
}
|
}
|
||||||
|
|
|
@@ -169,7 +169,7 @@ static int __init com20020_init(void)
 	dev->irq = 9;

 	if (com20020isa_probe(dev)) {
-		free_netdev(dev);
+		free_arcdev(dev);
 		return -EIO;
 	}

@@ -182,7 +182,7 @@ static void __exit com20020_exit(void)
 	unregister_netdev(my_dev);
 	free_irq(my_dev->irq, my_dev);
 	release_region(my_dev->base_addr, ARCNET_TOTAL_SIZE);
-	free_netdev(my_dev);
+	free_arcdev(my_dev);
 }

 #ifndef MODULE
@@ -127,6 +127,8 @@ static int com20020pci_probe(struct pci_dev *pdev,
 	int i, ioaddr, ret;
 	struct resource *r;

+	ret = 0;
+
 	if (pci_enable_device(pdev))
 		return -EIO;

@@ -142,6 +144,8 @@ static int com20020pci_probe(struct pci_dev *pdev,
 	priv->ci = ci;
 	mm = &ci->misc_map;

+	pci_set_drvdata(pdev, priv);
+
 	INIT_LIST_HEAD(&priv->list_dev);

 	if (mm->size) {
@@ -164,7 +168,7 @@ static int com20020pci_probe(struct pci_dev *pdev,
 		dev = alloc_arcdev(device);
 		if (!dev) {
 			ret = -ENOMEM;
-			goto out_port;
+			break;
 		}
 		dev->dev_port = i;

@@ -181,7 +185,7 @@ static int com20020pci_probe(struct pci_dev *pdev,
 			pr_err("IO region %xh-%xh already allocated\n",
 			       ioaddr, ioaddr + cm->size - 1);
 			ret = -EBUSY;
-			goto out_port;
+			goto err_free_arcdev;
 		}

 		/* Dummy access after Reset
@@ -209,76 +213,79 @@ static int com20020pci_probe(struct pci_dev *pdev,
 		if (!strncmp(ci->name, "EAE PLX-PCI FB2", 15))
 			lp->backplane = 1;

-		/* Get the dev_id from the PLX rotary coder */
-		if (!strncmp(ci->name, "EAE PLX-PCI MA1", 15))
-			dev_id_mask = 0x3;
-		dev->dev_id = (inb(priv->misc + ci->rotary) >> 4) & dev_id_mask;
+		if (ci->flags & ARC_HAS_ROTARY) {
+			/* Get the dev_id from the PLX rotary coder */
+			if (!strncmp(ci->name, "EAE PLX-PCI MA1", 15))
+				dev_id_mask = 0x3;
+			dev->dev_id = (inb(priv->misc + ci->rotary) >> 4) & dev_id_mask;
 			snprintf(dev->name, sizeof(dev->name), "arc%d-%d", dev->dev_id, i);
+		}

 		if (arcnet_inb(ioaddr, COM20020_REG_R_STATUS) == 0xFF) {
 			pr_err("IO address %Xh is empty!\n", ioaddr);
 			ret = -EIO;
-			goto out_port;
+			goto err_free_arcdev;
 		}
 		if (com20020_check(dev)) {
 			ret = -EIO;
-			goto out_port;
+			goto err_free_arcdev;
 		}

+		ret = com20020_found(dev, IRQF_SHARED);
+		if (ret)
+			goto err_free_arcdev;
+
 		card = devm_kzalloc(&pdev->dev, sizeof(struct com20020_dev),
 				    GFP_KERNEL);
 		if (!card) {
 			ret = -ENOMEM;
-			goto out_port;
+			goto err_free_arcdev;
 		}

 		card->index = i;
 		card->pci_priv = priv;
-		card->tx_led.brightness_set = led_tx_set;
-		card->tx_led.default_trigger = devm_kasprintf(&pdev->dev,
-						GFP_KERNEL, "arc%d-%d-tx",
-						dev->dev_id, i);
-		card->tx_led.name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
-						"pci:green:tx:%d-%d",
-						dev->dev_id, i);

-		card->tx_led.dev = &dev->dev;
-		card->recon_led.brightness_set = led_recon_set;
-		card->recon_led.default_trigger = devm_kasprintf(&pdev->dev,
-						GFP_KERNEL, "arc%d-%d-recon",
-						dev->dev_id, i);
-		card->recon_led.name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
-						"pci:red:recon:%d-%d",
-						dev->dev_id, i);
-		card->recon_led.dev = &dev->dev;
+		if (ci->flags & ARC_HAS_LED) {
+			card->tx_led.brightness_set = led_tx_set;
+			card->tx_led.default_trigger = devm_kasprintf(&pdev->dev,
+							GFP_KERNEL, "arc%d-%d-tx",
+							dev->dev_id, i);
+			card->tx_led.name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
+							"pci:green:tx:%d-%d",
+							dev->dev_id, i);
+			card->tx_led.dev = &dev->dev;
+			card->recon_led.brightness_set = led_recon_set;
+			card->recon_led.default_trigger = devm_kasprintf(&pdev->dev,
+							GFP_KERNEL, "arc%d-%d-recon",
+							dev->dev_id, i);
+			card->recon_led.name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
+							"pci:red:recon:%d-%d",
+							dev->dev_id, i);
+			card->recon_led.dev = &dev->dev;
+
+			ret = devm_led_classdev_register(&pdev->dev, &card->tx_led);
+			if (ret)
+				goto err_free_arcdev;
+
+			ret = devm_led_classdev_register(&pdev->dev, &card->recon_led);
+			if (ret)
+				goto err_free_arcdev;
+
+			dev_set_drvdata(&dev->dev, card);
+			devm_arcnet_led_init(dev, dev->dev_id, i);
+		}

 		card->dev = dev;
-
-		ret = devm_led_classdev_register(&pdev->dev, &card->tx_led);
-		if (ret)
-			goto out_port;
-
-		ret = devm_led_classdev_register(&pdev->dev, &card->recon_led);
-		if (ret)
-			goto out_port;
-
-		dev_set_drvdata(&dev->dev, card);
-
-		ret = com20020_found(dev, IRQF_SHARED);
-		if (ret)
-			goto out_port;
-
-		devm_arcnet_led_init(dev, dev->dev_id, i);
-
 		list_add(&card->list, &priv->list_dev);
+		continue;
+
+err_free_arcdev:
+		free_arcdev(dev);
+		break;
 	}

-	pci_set_drvdata(pdev, priv);
-
-	return 0;
-
-out_port:
-	com20020pci_remove(pdev);
+	if (ret)
+		com20020pci_remove(pdev);
+
 	return ret;
 }
@ -294,7 +301,7 @@ static void com20020pci_remove(struct pci_dev *pdev)
|
||||||
|
|
||||||
unregister_netdev(dev);
|
unregister_netdev(dev);
|
||||||
free_irq(dev->irq, dev);
|
free_irq(dev->irq, dev);
|
||||||
free_netdev(dev);
|
free_arcdev(dev);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -325,7 +332,7 @@ static struct com20020_pci_card_info card_info_5mbit = {
|
||||||
};
|
};
|
||||||
|
|
||||||
static struct com20020_pci_card_info card_info_sohard = {
|
static struct com20020_pci_card_info card_info_sohard = {
|
||||||
.name = "PLX-PCI",
|
.name = "SOHARD SH ARC-PCI",
|
||||||
.devcount = 1,
|
.devcount = 1,
|
||||||
/* SOHARD needs PCI base addr 4 */
|
/* SOHARD needs PCI base addr 4 */
|
||||||
.chan_map_tbl = {
|
.chan_map_tbl = {
|
||||||
|
@ -360,7 +367,7 @@ static struct com20020_pci_card_info card_info_eae_arc1 = {
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
.rotary = 0x0,
|
.rotary = 0x0,
|
||||||
.flags = ARC_CAN_10MBIT,
|
.flags = ARC_HAS_ROTARY | ARC_HAS_LED | ARC_CAN_10MBIT,
|
||||||
};
|
};
|
||||||
|
|
||||||
static struct com20020_pci_card_info card_info_eae_ma1 = {
|
static struct com20020_pci_card_info card_info_eae_ma1 = {
|
||||||
|
@ -392,7 +399,7 @@ static struct com20020_pci_card_info card_info_eae_ma1 = {
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
.rotary = 0x0,
|
.rotary = 0x0,
|
||||||
.flags = ARC_CAN_10MBIT,
|
.flags = ARC_HAS_ROTARY | ARC_HAS_LED | ARC_CAN_10MBIT,
|
||||||
};
|
};
|
||||||
|
|
||||||
static struct com20020_pci_card_info card_info_eae_fb2 = {
|
static struct com20020_pci_card_info card_info_eae_fb2 = {
|
||||||
|
@ -417,7 +424,7 @@ static struct com20020_pci_card_info card_info_eae_fb2 = {
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
.rotary = 0x0,
|
.rotary = 0x0,
|
||||||
.flags = ARC_CAN_10MBIT,
|
.flags = ARC_HAS_ROTARY | ARC_HAS_LED | ARC_CAN_10MBIT,
|
||||||
};
|
};
|
||||||
|
|
||||||
static const struct pci_device_id com20020pci_id_table[] = {
|
static const struct pci_device_id com20020pci_id_table[] = {
|
||||||
|
|
|
@@ -177,7 +177,7 @@ static void com20020_detach(struct pcmcia_device *link)
 	dev = info->dev;
 	if (dev) {
 		dev_dbg(&link->dev, "kfree...\n");
-		free_netdev(dev);
+		free_arcdev(dev);
 	}
 	dev_dbg(&link->dev, "kfree2...\n");
 	kfree(info);
@@ -394,7 +394,7 @@ static int __init com90io_init(void)
 	err = com90io_probe(dev);
 
 	if (err) {
-		free_netdev(dev);
+		free_arcdev(dev);
 		return err;
 	}
 

@@ -417,7 +417,7 @@ static void __exit com90io_exit(void)
 
 	free_irq(dev->irq, dev);
 	release_region(dev->base_addr, ARCNET_TOTAL_SIZE);
-	free_netdev(dev);
+	free_arcdev(dev);
 }
 
 module_init(com90io_init)
@@ -554,7 +554,7 @@ err_free_irq:
 err_release_mem:
 	release_mem_region(dev->mem_start, dev->mem_end - dev->mem_start + 1);
 err_free_dev:
-	free_netdev(dev);
+	free_arcdev(dev);
 	return -EIO;
 }

@@ -672,7 +672,7 @@ static void __exit com90xx_exit(void)
 		release_region(dev->base_addr, ARCNET_TOTAL_SIZE);
 		release_mem_region(dev->mem_start,
 				   dev->mem_end - dev->mem_start + 1);
-		free_netdev(dev);
+		free_arcdev(dev);
 	}
 }
@@ -683,10 +683,24 @@ static void xgbe_service(struct work_struct *work)
 static void xgbe_service_timer(struct timer_list *t)
 {
 	struct xgbe_prv_data *pdata = from_timer(pdata, t, service_timer);
+	struct xgbe_channel *channel;
+	unsigned int i;
 
 	queue_work(pdata->dev_workqueue, &pdata->service_work);
 
 	mod_timer(&pdata->service_timer, jiffies + HZ);
 
+	if (!pdata->tx_usecs)
+		return;
+
+	for (i = 0; i < pdata->channel_count; i++) {
+		channel = pdata->channel[i];
+		if (!channel->tx_ring || channel->tx_timer_active)
+			break;
+		channel->tx_timer_active = 1;
+		mod_timer(&channel->tx_timer,
+			  jiffies + usecs_to_jiffies(pdata->tx_usecs));
+	}
 }
 
 static void xgbe_init_timers(struct xgbe_prv_data *pdata)

@@ -314,10 +314,15 @@ static int xgbe_get_link_ksettings(struct net_device *netdev,
 
 	cmd->base.phy_address = pdata->phy.address;
 
-	cmd->base.autoneg = pdata->phy.autoneg;
-	cmd->base.speed = pdata->phy.speed;
-	cmd->base.duplex = pdata->phy.duplex;
+	if (netif_carrier_ok(netdev)) {
+		cmd->base.speed = pdata->phy.speed;
+		cmd->base.duplex = pdata->phy.duplex;
+	} else {
+		cmd->base.speed = SPEED_UNKNOWN;
+		cmd->base.duplex = DUPLEX_UNKNOWN;
+	}
 
+	cmd->base.autoneg = pdata->phy.autoneg;
 	cmd->base.port = PORT_NONE;
 
 	XGBE_LM_COPY(cmd, supported, lks, supported);

@@ -1178,7 +1178,19 @@ static int xgbe_phy_config_fixed(struct xgbe_prv_data *pdata)
 	if (pdata->phy.duplex != DUPLEX_FULL)
 		return -EINVAL;
 
-	xgbe_set_mode(pdata, mode);
+	/* Force the mode change for SFI in Fixed PHY config.
+	 * Fixed PHY configs needs PLL to be enabled while doing mode set.
+	 * When the SFP module isn't connected during boot, driver assumes
+	 * AN is ON and attempts autonegotiation. However, if the connected
+	 * SFP comes up in Fixed PHY config, the link will not come up as
+	 * PLL isn't enabled while the initial mode set command is issued.
+	 * So, force the mode change for SFI in Fixed PHY configuration to
+	 * fix link issues.
+	 */
+	if (mode == XGBE_MODE_SFI)
+		xgbe_change_mode(pdata, mode);
+	else
+		xgbe_set_mode(pdata, mode);
 
 	return 0;
 }
@@ -881,10 +881,13 @@ static int atl1e_setup_ring_resources(struct atl1e_adapter *adapter)
 		netdev_err(adapter->netdev, "offset(%d) > ring size(%d) !!\n",
 			   offset, adapter->ring_size);
 		err = -1;
-		goto failed;
+		goto free_buffer;
 	}
 
 	return 0;
+free_buffer:
+	kfree(tx_ring->tx_buffer);
+	tx_ring->tx_buffer = NULL;
 failed:
 	if (adapter->ring_vir_addr != NULL) {
 		pci_free_consistent(pdev, adapter->ring_size,
@@ -6859,7 +6859,7 @@ static int tg3_rx(struct tg3_napi *tnapi, int budget)
 				       desc_idx, *post_ptr);
 		drop_it_no_recycle:
 			/* Other statistics kept track of by card. */
-			tp->rx_dropped++;
+			tnapi->rx_dropped++;
 			goto next_pkt;
 		}
 

@@ -7889,8 +7889,10 @@ static int tg3_tso_bug(struct tg3 *tp, struct tg3_napi *tnapi,
 
 	segs = skb_gso_segment(skb, tp->dev->features &
 				    ~(NETIF_F_TSO | NETIF_F_TSO6));
-	if (IS_ERR(segs) || !segs)
+	if (IS_ERR(segs) || !segs) {
+		tnapi->tx_dropped++;
 		goto tg3_tso_bug_end;
+	}
 
 	do {
 		nskb = segs;

@@ -8163,7 +8165,7 @@ dma_error:
 drop:
 	dev_kfree_skb_any(skb);
 drop_nofree:
-	tp->tx_dropped++;
+	tnapi->tx_dropped++;
 	return NETDEV_TX_OK;
 }
 

@@ -9342,7 +9344,7 @@ static void __tg3_set_rx_mode(struct net_device *);
 /* tp->lock is held. */
 static int tg3_halt(struct tg3 *tp, int kind, bool silent)
 {
-	int err;
+	int err, i;
 
 	tg3_stop_fw(tp);
 

@@ -9363,6 +9365,13 @@ static int tg3_halt(struct tg3 *tp, int kind, bool silent)
 
 		/* And make sure the next sample is new data */
 		memset(tp->hw_stats, 0, sizeof(struct tg3_hw_stats));
+
+		for (i = 0; i < TG3_IRQ_MAX_VECS; ++i) {
+			struct tg3_napi *tnapi = &tp->napi[i];
+
+			tnapi->rx_dropped = 0;
+			tnapi->tx_dropped = 0;
+		}
 	}
 
 	return err;

@@ -11919,6 +11928,9 @@ static void tg3_get_nstats(struct tg3 *tp, struct rtnl_link_stats64 *stats)
 {
 	struct rtnl_link_stats64 *old_stats = &tp->net_stats_prev;
 	struct tg3_hw_stats *hw_stats = tp->hw_stats;
+	unsigned long rx_dropped;
+	unsigned long tx_dropped;
+	int i;
 
 	stats->rx_packets = old_stats->rx_packets +
 		get_stat64(&hw_stats->rx_ucast_packets) +

@@ -11965,8 +11977,26 @@ static void tg3_get_nstats(struct tg3 *tp, struct rtnl_link_stats64 *stats)
 	stats->rx_missed_errors = old_stats->rx_missed_errors +
 		get_stat64(&hw_stats->rx_discards);
 
-	stats->rx_dropped = tp->rx_dropped;
-	stats->tx_dropped = tp->tx_dropped;
+	/* Aggregate per-queue counters. The per-queue counters are updated
+	 * by a single writer, race-free. The result computed by this loop
+	 * might not be 100% accurate (counters can be updated in the middle of
+	 * the loop) but the next tg3_get_nstats() will recompute the current
+	 * value so it is acceptable.
+	 *
+	 * Note that these counters wrap around at 4G on 32bit machines.
+	 */
+	rx_dropped = (unsigned long)(old_stats->rx_dropped);
+	tx_dropped = (unsigned long)(old_stats->tx_dropped);
+
+	for (i = 0; i < tp->irq_cnt; i++) {
+		struct tg3_napi *tnapi = &tp->napi[i];
+
+		rx_dropped += tnapi->rx_dropped;
+		tx_dropped += tnapi->tx_dropped;
+	}
+
+	stats->rx_dropped = rx_dropped;
+	stats->tx_dropped = tx_dropped;
 }
 
 static int tg3_get_regs_len(struct net_device *dev)

@@ -3018,6 +3018,7 @@ struct tg3_napi {
 	u16				*rx_rcb_prod_idx;
 	struct tg3_rx_prodring_set	prodring;
 	struct tg3_rx_buffer_desc	*rx_rcb;
+	unsigned long			rx_dropped;
 
 	u32				tx_prod	____cacheline_aligned;
 	u32				tx_cons;

@@ -3026,6 +3027,7 @@ struct tg3_napi {
 	u32				prodmbox;
 	struct tg3_tx_buffer_desc	*tx_ring;
 	struct tg3_tx_ring_info		*tx_buffers;
+	unsigned long			tx_dropped;
 
 	dma_addr_t			status_mapping;
 	dma_addr_t			rx_rcb_mapping;

@@ -3219,8 +3221,6 @@ struct tg3 {
 
 
 	/* begin "everything else" cacheline(s) section */
-	unsigned long			rx_dropped;
-	unsigned long			tx_dropped;
 	struct rtnl_link_stats64	net_stats_prev;
 	struct tg3_ethtool_stats	estats_prev;
 
@@ -70,6 +70,27 @@ static enum mac_mode hns_get_enet_interface(const struct hns_mac_cb *mac_cb)
 	}
 }
 
+static u32 hns_mac_link_anti_shake(struct mac_driver *mac_ctrl_drv)
+{
+#define HNS_MAC_LINK_WAIT_TIME 5
+#define HNS_MAC_LINK_WAIT_CNT 40
+
+	u32 link_status = 0;
+	int i;
+
+	if (!mac_ctrl_drv->get_link_status)
+		return link_status;
+
+	for (i = 0; i < HNS_MAC_LINK_WAIT_CNT; i++) {
+		msleep(HNS_MAC_LINK_WAIT_TIME);
+		mac_ctrl_drv->get_link_status(mac_ctrl_drv, &link_status);
+		if (!link_status)
+			break;
+	}
+
+	return link_status;
+}
+
 void hns_mac_get_link_status(struct hns_mac_cb *mac_cb, u32 *link_status)
 {
 	struct mac_driver *mac_ctrl_drv;

@@ -87,6 +108,14 @@ void hns_mac_get_link_status(struct hns_mac_cb *mac_cb, u32 *link_status)
 							       &sfp_prsnt);
 		if (!ret)
 			*link_status = *link_status && sfp_prsnt;
+
+		/* for FIBER port, it may have a fake link up.
+		 * when the link status changes from down to up, we need to do
+		 * anti-shake. the anti-shake time is base on tests.
+		 * only FIBER port need to do this.
+		 */
+		if (*link_status && !mac_cb->link)
+			*link_status = hns_mac_link_anti_shake(mac_ctrl_drv);
 	}
 
 	mac_cb->link = *link_status;
@@ -651,8 +651,8 @@ static void mlx5_fw_tracer_handle_traces(struct work_struct *work)
 	get_block_timestamp(tracer, &tmp_trace_block[TRACES_PER_BLOCK - 1]);
 
 	while (block_timestamp > tracer->last_timestamp) {
-		/* Check block override if its not the first block */
-		if (!tracer->last_timestamp) {
+		/* Check block override if it's not the first block */
+		if (tracer->last_timestamp) {
 			u64 *ts_event;
 			/* To avoid block override be the HW in case of buffer
 			 * wraparound, the time stamp of the previous block
@@ -1024,6 +1024,7 @@ static void qed_ilt_shadow_free(struct qed_hwfn *p_hwfn)
 		p_dma->p_virt = NULL;
 	}
 	kfree(p_mngr->ilt_shadow);
+	p_mngr->ilt_shadow = NULL;
 }
 
 static int qed_ilt_blk_alloc(struct qed_hwfn *p_hwfn,
@@ -30,6 +30,8 @@
 
 #define QCASPI_MAX_REGS 0x20
 
+#define QCASPI_RX_MAX_FRAMES 4
+
 static const u16 qcaspi_spi_regs[] = {
 	SPI_REG_BFR_SIZE,
 	SPI_REG_WRBUF_SPC_AVA,

@@ -266,31 +268,30 @@ qcaspi_get_ringparam(struct net_device *dev, struct ethtool_ringparam *ring)
 {
 	struct qcaspi *qca = netdev_priv(dev);
 
-	ring->rx_max_pending = 4;
+	ring->rx_max_pending = QCASPI_RX_MAX_FRAMES;
 	ring->tx_max_pending = TX_RING_MAX_LEN;
-	ring->rx_pending = 4;
+	ring->rx_pending = QCASPI_RX_MAX_FRAMES;
 	ring->tx_pending = qca->txr.count;
 }
 
 static int
 qcaspi_set_ringparam(struct net_device *dev, struct ethtool_ringparam *ring)
 {
-	const struct net_device_ops *ops = dev->netdev_ops;
 	struct qcaspi *qca = netdev_priv(dev);
 
-	if ((ring->rx_pending) ||
+	if (ring->rx_pending != QCASPI_RX_MAX_FRAMES ||
 	    (ring->rx_mini_pending) ||
 	    (ring->rx_jumbo_pending))
 		return -EINVAL;
 
-	if (netif_running(dev))
-		ops->ndo_stop(dev);
+	if (qca->spi_thread)
+		kthread_park(qca->spi_thread);
 
 	qca->txr.count = max_t(u32, ring->tx_pending, TX_RING_MIN_LEN);
 	qca->txr.count = min_t(u16, qca->txr.count, TX_RING_MAX_LEN);
 
-	if (netif_running(dev))
-		ops->ndo_open(dev);
+	if (qca->spi_thread)
+		kthread_unpark(qca->spi_thread);
 
 	return 0;
 }

@@ -552,6 +552,18 @@ qcaspi_spi_thread(void *data)
 	netdev_info(qca->net_dev, "SPI thread created\n");
 	while (!kthread_should_stop()) {
 		set_current_state(TASK_INTERRUPTIBLE);
+		if (kthread_should_park()) {
+			netif_tx_disable(qca->net_dev);
+			netif_carrier_off(qca->net_dev);
+			qcaspi_flush_tx_ring(qca);
+			kthread_parkme();
+			if (qca->sync == QCASPI_SYNC_READY) {
+				netif_carrier_on(qca->net_dev);
+				netif_wake_queue(qca->net_dev);
+			}
+			continue;
+		}
+
 		if ((qca->intr_req == qca->intr_svc) &&
 		    !qca->txr.skb[qca->txr.head])
 			schedule();

@@ -580,11 +592,17 @@ qcaspi_spi_thread(void *data)
 			if (intr_cause & SPI_INT_CPU_ON) {
 				qcaspi_qca7k_sync(qca, QCASPI_EVENT_CPUON);
 
+				/* Frame decoding in progress */
+				if (qca->frm_handle.state != qca->frm_handle.init)
+					qca->net_dev->stats.rx_dropped++;
+
+				qcafrm_fsm_init_spi(&qca->frm_handle);
+				qca->stats.device_reset++;
+
 				/* not synced. */
 				if (qca->sync != QCASPI_SYNC_READY)
 					continue;
 
-				qca->stats.device_reset++;
 				netif_wake_queue(qca->net_dev);
 				netif_carrier_on(qca->net_dev);
 			}
@@ -1392,13 +1392,13 @@ static int ravb_open(struct net_device *ndev)
 	if (priv->chip_id == RCAR_GEN2)
 		ravb_ptp_init(ndev, priv->pdev);
 
-	netif_tx_start_all_queues(ndev);
-
 	/* PHY control start */
 	error = ravb_phy_start(ndev);
 	if (error)
 		goto out_ptp_stop;
 
+	netif_tx_start_all_queues(ndev);
+
 	return 0;
 
 out_ptp_stop:

@@ -1447,6 +1447,12 @@ static void ravb_tx_timeout_work(struct work_struct *work)
 	struct net_device *ndev = priv->ndev;
 	int error;
 
+	if (!rtnl_trylock()) {
+		usleep_range(1000, 2000);
+		schedule_work(&priv->work);
+		return;
+	}
+
 	netif_tx_stop_all_queues(ndev);
 
 	/* Stop PTP Clock driver */

@@ -1479,7 +1485,7 @@ static void ravb_tx_timeout_work(struct work_struct *work)
 			 */
 			netdev_err(ndev, "%s: ravb_dmac_init() failed, error %d\n",
 				   __func__, error);
-			return;
+			goto out_unlock;
 		}
 		ravb_emac_init(ndev);
 

@@ -1489,6 +1495,9 @@ out:
 		ravb_ptp_init(ndev, priv->pdev);
 
 	netif_tx_start_all_queues(ndev);
+
+out_unlock:
+	rtnl_unlock();
 }
 
 /* Packet transmit function for Ethernet AVB */
|
@ -360,7 +360,11 @@ int stmmac_mdio_register(struct net_device *ndev)
|
||||||
new_bus->parent = priv->device;
|
new_bus->parent = priv->device;
|
||||||
|
|
||||||
err = of_mdiobus_register(new_bus, mdio_node);
|
err = of_mdiobus_register(new_bus, mdio_node);
|
||||||
if (err != 0) {
|
if (err == -ENODEV) {
|
||||||
|
err = 0;
|
||||||
|
dev_info(dev, "MDIO bus is disabled\n");
|
||||||
|
goto bus_register_fail;
|
||||||
|
} else if (err) {
|
||||||
dev_err(dev, "Cannot register the MDIO bus\n");
|
dev_err(dev, "Cannot register the MDIO bus\n");
|
||||||
goto bus_register_fail;
|
goto bus_register_fail;
|
||||||
}
|
}
|
||||||
|
|
|
@@ -702,7 +702,7 @@ axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 		if (lp->features & XAE_FEATURE_FULL_TX_CSUM) {
 			/* Tx Full Checksum Offload Enabled */
 			cur_p->app0 |= 2;
-		} else if (lp->features & XAE_FEATURE_PARTIAL_RX_CSUM) {
+		} else if (lp->features & XAE_FEATURE_PARTIAL_TX_CSUM) {
 			csum_start_off = skb_transport_offset(skb);
 			csum_index_off = csum_start_off + skb->csum_offset;
 			/* Tx Partial Checksum Offload Enabled */
@@ -2,5 +2,6 @@ config HYPERV_NET
 	tristate "Microsoft Hyper-V virtual network driver"
 	depends on HYPERV
 	select UCS2_STRING
+	select NLS
 	help
 	  Select this option to enable the Hyper-V virtual network driver.

@@ -2048,9 +2048,6 @@ static int netvsc_vf_join(struct net_device *vf_netdev,
 		goto upper_link_failed;
 	}
 
-	/* set slave flag before open to prevent IPv6 addrconf */
-	vf_netdev->flags |= IFF_SLAVE;
-
 	schedule_delayed_work(&ndev_ctx->vf_takeover, VF_TAKEOVER_INT);
 
 	call_netdevice_notifiers(NETDEV_JOIN, vf_netdev);

@@ -2148,16 +2145,18 @@ static struct net_device *get_netvsc_byslot(const struct net_device *vf_netdev)
 		return hv_get_drvdata(ndev_ctx->device_ctx);
 	}
 
-	/* Fallback path to check synthetic vf with
-	 * help of mac addr
+	/* Fallback path to check synthetic vf with help of mac addr.
+	 * Because this function can be called before vf_netdev is
+	 * initialized (NETDEV_POST_INIT) when its perm_addr has not been copied
+	 * from dev_addr, also try to match to its dev_addr.
+	 * Note: On Hyper-V and Azure, it's not possible to set a MAC address
+	 * on a VF that matches to the MAC of a unrelated NETVSC device.
 	 */
 	list_for_each_entry(ndev_ctx, &netvsc_dev_list, list) {
 		ndev = hv_get_drvdata(ndev_ctx->device_ctx);
-		if (ether_addr_equal(vf_netdev->perm_addr, ndev->perm_addr)) {
-			netdev_notice(vf_netdev,
-				      "falling back to mac addr based matching\n");
+		if (ether_addr_equal(vf_netdev->perm_addr, ndev->perm_addr) ||
+		    ether_addr_equal(vf_netdev->dev_addr, ndev->perm_addr))
 			return ndev;
-		}
 	}
 
 	netdev_notice(vf_netdev,

@@ -2165,6 +2164,19 @@ static struct net_device *get_netvsc_byslot(const struct net_device *vf_netdev)
 	return NULL;
 }
 
+static int netvsc_prepare_bonding(struct net_device *vf_netdev)
+{
+	struct net_device *ndev;
+
+	ndev = get_netvsc_byslot(vf_netdev);
+	if (!ndev)
+		return NOTIFY_DONE;
+
+	/* set slave flag before open to prevent IPv6 addrconf */
+	vf_netdev->flags |= IFF_SLAVE;
+	return NOTIFY_DONE;
+}
+
 static int netvsc_register_vf(struct net_device *vf_netdev)
 {
 	struct net_device_context *net_device_ctx;

@@ -2481,6 +2493,8 @@ static int netvsc_netdev_event(struct notifier_block *this,
 		return NOTIFY_DONE;
 
 	switch (event) {
+	case NETDEV_POST_INIT:
+		return netvsc_prepare_bonding(event_dev);
 	case NETDEV_REGISTER:
 		return netvsc_register_vf(event_dev);
 	case NETDEV_UNREGISTER:

@@ -2514,12 +2528,17 @@ static int __init netvsc_drv_init(void)
 	}
 	netvsc_ring_bytes = ring_size * PAGE_SIZE;
 
+	register_netdevice_notifier(&netvsc_netdev_notifier);
+
 	ret = vmbus_driver_register(&netvsc_drv);
 	if (ret)
-		return ret;
+		goto err_vmbus_reg;
 
-	register_netdevice_notifier(&netvsc_netdev_notifier);
 	return 0;
 
+err_vmbus_reg:
+	unregister_netdevice_notifier(&netvsc_netdev_notifier);
+	return ret;
 }
 
 MODULE_LICENSE("GPL");
@@ -291,8 +291,10 @@ static int __team_options_register(struct team *team,
 	return 0;
 
 inst_rollback:
-	for (i--; i >= 0; i--)
+	for (i--; i >= 0; i--) {
 		__team_option_inst_del_option(team, dst_opts[i]);
+		list_del(&dst_opts[i]->list);
+	}
 
 	i = option_count;
 alloc_rollback:
@@ -1156,14 +1156,14 @@ static int ax88179_system_resume(struct ax_device *axdev)
 #endif
 	reg16 = AX_PHYPWR_RSTCTL_IPRL;
 	ax_write_cmd_nopm(axdev, AX_ACCESS_MAC, AX_PHYPWR_RSTCTL, 2, 2, &reg16);
-	msleep(200);
+	msleep(500);
 
 	ax88179_AutoDetach(axdev, 1);
 
 	ax_read_cmd_nopm(axdev, AX_ACCESS_MAC, AX_CLK_SELECT, 1, 1, &reg8, 0);
 	reg8 |= AX_CLK_SELECT_ACS | AX_CLK_SELECT_BCS;
 	ax_write_cmd_nopm(axdev, AX_ACCESS_MAC, AX_CLK_SELECT, 1, 1, &reg8);
-	msleep(100);
+	msleep(200);
 
 	reg16 = AX_RX_CTL_START | AX_RX_CTL_AP |
 		AX_RX_CTL_AMALL | AX_RX_CTL_AB;
@@ -1250,6 +1250,7 @@ static const struct usb_device_id products[] = {
 	{QMI_FIXED_INTF(0x19d2, 0x0168, 4)},
 	{QMI_FIXED_INTF(0x19d2, 0x0176, 3)},
 	{QMI_FIXED_INTF(0x19d2, 0x0178, 3)},
+	{QMI_FIXED_INTF(0x19d2, 0x0189, 4)},	/* ZTE MF290 */
 	{QMI_FIXED_INTF(0x19d2, 0x0191, 4)},	/* ZTE EuFi890 */
 	{QMI_FIXED_INTF(0x19d2, 0x0199, 1)},	/* ZTE MF820S */
 	{QMI_FIXED_INTF(0x19d2, 0x0200, 1)},
@@ -2647,6 +2647,8 @@ enum parport_pc_pci_cards {
 	netmos_9865,
 	quatech_sppxp100,
 	wch_ch382l,
+	brainboxes_uc146,
+	brainboxes_px203,
 };
 
 
@@ -2710,6 +2712,8 @@ static struct parport_pc_pci {
 	/* netmos_9865 */		{ 1, { { 0, -1 }, } },
 	/* quatech_sppxp100 */		{ 1, { { 0, 1 }, } },
 	/* wch_ch382l */		{ 1, { { 2, -1 }, } },
+	/* brainboxes_uc146 */		{ 1, { { 3, -1 }, } },
+	/* brainboxes_px203 */		{ 1, { { 0, -1 }, } },
 };
 
 static const struct pci_device_id parport_pc_pci_tbl[] = {
@@ -2801,6 +2805,23 @@ static const struct pci_device_id parport_pc_pci_tbl[] = {
 	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, quatech_sppxp100 },
 	/* WCH CH382L PCI-E single parallel port card */
 	{ 0x1c00, 0x3050, 0x1c00, 0x3050, 0, 0, wch_ch382l },
+	/* Brainboxes IX-500/550 */
+	{ PCI_VENDOR_ID_INTASHIELD, 0x402a,
+	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, oxsemi_pcie_pport },
+	/* Brainboxes UC-146/UC-157 */
+	{ PCI_VENDOR_ID_INTASHIELD, 0x0be1,
+	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, brainboxes_uc146 },
+	{ PCI_VENDOR_ID_INTASHIELD, 0x0be2,
+	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, brainboxes_uc146 },
+	/* Brainboxes PX-146/PX-257 */
+	{ PCI_VENDOR_ID_INTASHIELD, 0x401c,
+	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, oxsemi_pcie_pport },
+	/* Brainboxes PX-203 */
+	{ PCI_VENDOR_ID_INTASHIELD, 0x4007,
+	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, brainboxes_px203 },
+	/* Brainboxes PX-475 */
+	{ PCI_VENDOR_ID_INTASHIELD, 0x401f,
+	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, oxsemi_pcie_pport },
 	{ 0, } /* terminate list */
 };
 MODULE_DEVICE_TABLE(pci, parport_pc_pci_tbl);
@@ -510,15 +510,12 @@ static void enable_slot(struct acpiphp_slot *slot, bool bridge)
 			if (pass && dev->subordinate) {
 				check_hotplug_bridge(slot, dev);
 				pcibios_resource_survey_bus(dev->subordinate);
-				if (pci_is_root_bus(bus))
-					__pci_bus_size_bridges(dev->subordinate, &add_list);
+				__pci_bus_size_bridges(dev->subordinate,
+						       &add_list);
 			}
 		}
 	}
-	if (pci_is_root_bus(bus))
-		__pci_bus_assign_resources(bus, &add_list, NULL);
-	else
-		pci_assign_unassigned_bridge_resources(bus->self);
+	__pci_bus_assign_resources(bus, &add_list, NULL);
 	}
 
 	acpiphp_sanitize_bus(bus);
@@ -1224,17 +1224,17 @@ EXPORT_SYMBOL_GPL(pinctrl_lookup_state);
 static int pinctrl_commit_state(struct pinctrl *p, struct pinctrl_state *state)
 {
 	struct pinctrl_setting *setting, *setting2;
-	struct pinctrl_state *old_state = p->state;
+	struct pinctrl_state *old_state = READ_ONCE(p->state);
 	int ret;
 
-	if (p->state) {
+	if (old_state) {
 		/*
 		 * For each pinmux setting in the old state, forget SW's record
 		 * of mux owner for that pingroup. Any pingroups which are
 		 * still owned by the new state will be re-acquired by the call
 		 * to pinmux_enable_setting() in the loop below.
 		 */
-		list_for_each_entry(setting, &p->state->settings, node) {
+		list_for_each_entry(setting, &old_state->settings, node) {
 			if (setting->type != PIN_MAP_TYPE_MUX_GROUP)
 				continue;
 			pinmux_disable_setting(setting);
@@ -939,6 +939,13 @@ static const struct of_device_id atmel_pctrl_of_match[] = {
 	}
 };
 
+/*
+ * This lock class allows to tell lockdep that parent IRQ and children IRQ do
+ * not share the same class so it does not raise false positive
+ */
+static struct lock_class_key atmel_lock_key;
+static struct lock_class_key atmel_request_key;
+
 static int atmel_pinctrl_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
@@ -1089,6 +1096,7 @@ static int atmel_pinctrl_probe(struct platform_device *pdev)
 		irq_set_chip_and_handler(irq, &atmel_gpio_irq_chip,
 					 handle_simple_irq);
 		irq_set_chip_data(irq, atmel_pioctrl);
+		irq_set_lockdep_class(irq, &atmel_lock_key, &atmel_request_key);
 		dev_dbg(dev,
 			"atmel gpio irq domain: hwirq: %d, linux irq: %d\n",
 			i, irq);
@@ -110,7 +110,7 @@ static const struct telemetry_core_ops telm_defpltops = {
 /**
  * telemetry_update_events() - Update telemetry Configuration
  * @pss_evtconfig: PSS related config. No change if num_evts = 0.
- * @pss_evtconfig: IOSS related config. No change if num_evts = 0.
+ * @ioss_evtconfig: IOSS related config. No change if num_evts = 0.
  *
  * This API updates the IOSS & PSS Telemetry configuration. Old config
  * is overwritten. Call telemetry_reset_events when logging is over
@@ -184,7 +184,7 @@ EXPORT_SYMBOL_GPL(telemetry_reset_events);
 /**
  * telemetry_get_eventconfig() - Returns the pss and ioss events enabled
  * @pss_evtconfig: Pointer to PSS related configuration.
- * @pss_evtconfig: Pointer to IOSS related configuration.
+ * @ioss_evtconfig: Pointer to IOSS related configuration.
  * @pss_len: Number of u32 elements allocated for pss_evtconfig array
  * @ioss_len: Number of u32 elements allocated for ioss_evtconfig array
 *
@@ -459,6 +459,9 @@ static void __reset_control_put_internal(struct reset_control *rstc)
 {
 	lockdep_assert_held(&reset_list_mutex);
 
+	if (IS_ERR_OR_NULL(rstc))
+		return;
+
 	kref_put(&rstc->refcnt, __reset_control_release);
 }
 
Some files were not shown because too many files have changed in this diff.