mhi: add snapshot for MHI driver stack

This change ports the MHI driver stack and its dependencies
from the msm-4.14 kernel. All files are taken as-is from msm-4.14;
additional changes were made to fix checkpatch errors.

This MHI driver stack snapshot is taken as of msm-4.14
commit b5ca255f11c5 ("Merge "regulator: qpnp-lcdb: Correct the
get_voltage calculation"").

CRs-Fixed: 2313588
Change-Id: I17422e80f437eb2aa42a98dd5d451b42e45646db
Signed-off-by: Sujeev Dias <sdias@codeaurora.org>

Documentation/mhi.txt (new file)

@@ -0,0 +1,276 @@
Overview of Linux kernel MHI support
====================================
Modem-Host Interface (MHI)
==========================
MHI is used by the host to control and communicate with a modem over a
high-speed peripheral bus. Although MHI can easily be adapted to any
peripheral bus, it is primarily used with PCIe-based devices. The host has
one or more PCIe root ports connected to the modem device. The host has
limited access to the device memory space, including register configuration
and control of the device operation. Data transfers are invoked from the
device.
All data structures used by MHI are in host system memory. The device
accesses those data structures over the PCIe interface. MHI data structures
and data buffers in host system memory regions are mapped for the device.
Memory spaces
-------------
PCIe configuration space : Used for enumeration and resource management,
such as interrupts and base addresses. This is handled by the MHI control
driver.
MMIO
----
MHI MMIO : Memory-mapped IO consists of a set of registers in the device
hardware, which are mapped to the host memory space through the PCIe base
address register (BAR).
MHI control registers : Access to MHI configuration registers
(struct mhi_controller.regs).
MHI BHI registers : Boot host interface registers (struct mhi_controller.bhi)
used for firmware download before MHI initialization.
Channel db array : Doorbell registers (struct mhi_chan.tre_ring.db_addr) used
by the host to notify the device there is new work to do.
Event db array : Associated with the event context array
(struct mhi_event.ring.db_addr), used by the host to notify the device that
free events are available.
Data structures
---------------
Host memory : Directly accessed by the host to manage the MHI data structures
and buffers. The device accesses the host memory over the PCIe interface.
Channel context array : All channel configurations are organized in a channel
context data array.
struct __packed mhi_chan_ctxt;
struct mhi_ctxt.chan_ctxt;
Transfer rings : Used by the host to schedule work items for a channel,
organized as a circular queue of transfer descriptors (TDs).
struct __packed mhi_tre;
struct mhi_chan.tre_ring;
Event context array : All event configurations are organized in an event
context data array.
struct mhi_ctxt.er_ctxt;
struct __packed mhi_event_ctxt;
Event rings : Used by the device to send completion and state transition
messages to the host.
struct mhi_event.ring;
struct __packed mhi_tre;
Command context array : All command configurations are organized in a command
context data array.
struct __packed mhi_cmd_ctxt;
struct mhi_ctxt.cmd_ctxt;
Command rings : Used by the host to send MHI commands to the device.
struct __packed mhi_tre;
struct mhi_cmd.ring;
Transfer rings
--------------
MHI channels are logical, unidirectional data pipes between the host and the
device. Each channel is associated with a single transfer ring. The data
direction can be either inbound (device to host) or outbound (host to
device). Transfer descriptors are managed using transfer rings, which are
defined for each channel between the device and host and reside in host
memory.
Transfer ring Pointer: Transfer Ring Array
[Read Pointer (RP)] ----------->[Ring Element] } TD
[Write Pointer (WP)]-           [Ring Element]
                      -         [Ring Element]
                       --------->[Ring Element]
                                [Ring Element]
1. Host allocates memory for the transfer ring
2. Host sets base, read pointer, and write pointer in the corresponding
   channel context
3. Ring is considered empty when RP == WP
4. Ring is considered full when WP + 1 == RP
5. RP indicates the next element to be serviced by the device
6. When the host has a new buffer to send, it updates the ring element with
   the buffer information
7. Host increments the WP to the next element
8. Host rings the associated channel DB (see the sketch below)
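The empty/full rules above map directly to code. A minimal sketch, assuming
a hypothetical struct ring whose rp/wp fields are element indices and whose
el_count field is the total number of ring elements (the names here are
illustrative, not taken from the MHI code):

	struct ring {
		u32 rp;       /* next element the device will service */
		u32 wp;       /* next element the host will write */
		u32 el_count; /* total number of elements in the ring */
	};

	/* empty: nothing queued for the device to service */
	static bool ring_empty(struct ring *r)
	{
		return r->rp == r->wp;
	}

	/* full: writing one more element would collide with RP */
	static bool ring_full(struct ring *r)
	{
		return ((r->wp + 1) % r->el_count) == r->rp;
	}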
Event rings
-----------
Events from the device to the host are organized in event rings and defined
in event descriptors (EDs). Event rings are arrays of EDs that reside in
host memory.
Event ring Pointer: Event Ring Array
[Read Pointer (RP)] ----------->[Ring Element] } ED
[Write Pointer (WP)]-           [Ring Element]
                      -         [Ring Element]
                       --------->[Ring Element]
                                [Ring Element]
1. Host allocates memory for the event ring
2. Host sets base, read pointer, and write pointer in the corresponding
   channel context
3. Both the host and the device have a local copy of RP and WP
4. Ring is considered empty (no events to service) when WP + 1 == RP
5. Ring is full of events when RP == WP
6. RP - 1 is the last event the device programmed
7. When the device has a new event to send, it updates the ED pointed to
   by RP
8. Device increments RP to the next element
9. Device triggers an interrupt
Example operation for data transfer:
1. Host prepares a TD with buffer information
2. Host increments Chan[id].ctxt.WP
3. Host rings the channel DB register
4. Device wakes up and processes the TD
5. Device generates a completion event for that TD by updating the ED
6. Device increments Event[id].ctxt.RP
7. Device triggers an MSI to wake the host
8. Host wakes up and checks the event ring for the completion event
9. Host updates Event[id].ctxt.WP to indicate the completion event has been
   processed (see the sketch below)
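A minimal sketch of steps 8-9 on the host side; every name here (struct
ev_ring, its fields, and the helper functions) is illustrative, not taken
from the MHI code:

	/* called from the MSI handler once the host wakes up */
	static void service_event_ring(struct ev_ring *ev)
	{
		/* the device advanced the context RP past our last-seen spot */
		while (ev->local_rp != ev->ctxt->rp) {
			process_completion(ev->local_rp); /* complete the TD */
			ev->local_rp = advance(ev, ev->local_rp); /* next ED */
		}
		ev->ctxt->wp = ev->local_rp; /* step 9: return processed EDs */
		ring_event_db(ev); /* tell the device free EDs are available */
	}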
Time sync
---------
To synchronize two applications between the host and an external modem, MHI
provides native support for reading the external modem's free-running timer
value in a fast, reliable way. MHI clients do not need to create
client-specific methods to get the modem time.
When a client requests the modem time, the MHI host automatically captures
the host time at that moment so clients are able to do accurate drift
adjustment.
Example:
Client requests time @ time T1
Host Time: Tx
Modem Time: Ty
Client requests time @ time T2
Host Time: Txx
Modem Time: Tyy
Then the drift is:
Tyy - Ty + <drift> == Txx - Tx
Clients are free to implement their own drift algorithms; what the MHI host
provides is a way to accurately correlate the host time with the external
modem time.
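As a worked example: if the host clock advances by Txx - Tx = 100 ms between
the two requests while the modem clock advances by Tyy - Ty = 98 ms, then
<drift> = 2 ms accumulated over that interval, which the client can fold
into its host-to-modem time correlation.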
To avoid link-level latencies, the controller must support the ability to
disable any link-level latency.
During time capture the host will:
1. Capture the host time
2. Trigger a doorbell to capture the modem time
It is important that the time between step 1 and step 2 is as deterministic
as possible. Therefore, the MHI host will:
1. Disable any MHI-related low power modes
2. Disable preemption
3. Request the bus master to disable any link-level latencies. The
   controller should disable all low power modes such as L0s, L1, and L1ss.
MHI States
----------
enum MHI_STATE {
MHI_STATE_RESET : MHI is in reset state (the POR state). The host is not
allowed to access the device MMIO register space.
MHI_STATE_READY : Device is ready for initialization. The host can start MHI
initialization by programming MMIO.
MHI_STATE_M0 : MHI is in fully active state; data transfer is active.
MHI_STATE_M1 : Device is in a suspended state.
MHI_STATE_M2 : MHI is in low power mode; the device may enter a lower power
mode.
MHI_STATE_M3 : Both host and device are in a suspended state. The PCIe link
is not accessible to the device.
};
MHI Initialization
------------------
1. After the system boots, the device is enumerated over the PCIe interface
2. Host allocates MHI context for the event, channel, and command arrays
3. Host initializes the context arrays and prepares interrupts
4. Host waits until the device enters the READY state
5. Host programs the MHI MMIO registers and sets the device into MHI_M0
   state
6. Host waits for the device to enter M0 state
Linux Software Architecture
===========================
MHI Controller
--------------
The MHI controller is also the MHI bus master, in charge of managing the
physical link between the host and the device. It is not involved in the
actual data transfer, at least for PCIe-based buses; for other bus types,
support can be added.
Roles:
1. Turn on the PCIe bus and configure the link
2. Configure MSI, SMMU, and IOMEM
3. Allocate struct mhi_controller and register with the MHI bus framework
4. Initiate the power on and shutdown sequences
5. Initiate suspend and resume
Usage
-----
1. Allocate the control data structure by calling mhi_alloc_controller()
2. Initialize mhi_controller with all the known information, such as:
   - Device topology
   - IOMMU window
   - IOMEM mapping
   - Device to use for memory allocation, and of_node with DT configuration
   - Asynchronous callback functions
3. Register the MHI controller with the MHI bus framework by calling
   of_register_mhi_controller()
After successfully registering, the controller can initiate any of these
power sequences (a power-up sketch follows this list):
1. Power up sequence
- mhi_prepare_for_power_up()
- mhi_async_power_up()
- mhi_sync_power_up()
2. Power down sequence
- mhi_power_down()
- mhi_unprepare_after_power_down()
3. Initiate suspend
- mhi_pm_suspend()
4. Initiate resume
- mhi_pm_resume()
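A minimal controller-side sketch of the registration and power-up flow
above (error handling trimmed; the my_* callback names are hypothetical,
while the MHI calls are the ones listed above):

	static int my_controller_probe(struct pci_dev *pci_dev)
	{
		struct mhi_controller *mhi_cntrl;
		int ret;

		/* no private data in this sketch, hence size 0 */
		mhi_cntrl = mhi_alloc_controller(0);
		if (!mhi_cntrl)
			return -ENOMEM;

		/* topology, IOMMU window, IOMEM, DT node, callbacks ... */
		mhi_cntrl->of_node = pci_dev->dev.of_node;
		mhi_cntrl->runtime_get = my_runtime_get; /* hypothetical */
		mhi_cntrl->runtime_put = my_runtime_put; /* hypothetical */
		mhi_cntrl->status_cb = my_status_cb;     /* hypothetical */

		ret = of_register_mhi_controller(mhi_cntrl);
		if (ret)
			return ret;

		/* power up: prepare, then the sync (or async) variant */
		ret = mhi_prepare_for_power_up(mhi_cntrl);
		if (!ret)
			ret = mhi_sync_power_up(mhi_cntrl);
		return ret;
	}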
MHI Devices
-----------
A logical device that binds to a maximum of two physical MHI channels. Once
MHI is in the powered-on state, each channel supported by the controller
will be allocated as an mhi_device.
Each supported device is enumerated under
/sys/bus/mhi/devices/
struct mhi_device;
MHI Driver
----------
Each MHI driver can bind to one or more MHI devices. The MHI host driver
binds an mhi_device to a matching mhi_driver.
All registered drivers are visible under
/sys/bus/mhi/drivers/
struct mhi_driver;
Usage
-----
1. Register the driver using mhi_driver_register
2. Before sending data, prepare the device for transfer by calling
   mhi_prepare_for_transfer
3. Initiate data transfer by calling mhi_queue_transfer
4. When finished, call mhi_unprepare_from_transfer to end the data transfer
   (see the client sketch below)
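A minimal client sketch of this flow, patterned after the mhi_bl driver in
this snapshot; the "EXAMPLE" channel name and the my_* identifiers are
hypothetical:

	static void my_xfer_cb(struct mhi_device *mhi_dev,
			       struct mhi_result *mhi_result)
	{
		/* handle ul/dl transfer completion here */
	}

	static int my_probe(struct mhi_device *mhi_dev,
			    const struct mhi_device_id *id)
	{
		static u8 buf[64]; /* outbound payload for illustration */
		int ret;

		ret = mhi_prepare_for_transfer(mhi_dev);
		if (ret)
			return ret;

		return mhi_queue_transfer(mhi_dev, DMA_TO_DEVICE, buf,
					  sizeof(buf), MHI_EOT);
	}

	static void my_remove(struct mhi_device *mhi_dev)
	{
		mhi_unprepare_from_transfer(mhi_dev);
	}

	static const struct mhi_device_id my_match_table[] = {
		{ .chan = "EXAMPLE" }, /* hypothetical channel */
		{},
	};

	static struct mhi_driver my_driver = {
		.id_table = my_match_table,
		.probe = my_probe,
		.remove = my_remove,
		.ul_xfer_cb = my_xfer_cb,
		.dl_xfer_cb = my_xfer_cb,
		.driver = {
			.name = "my_mhi_client",
		},
	};

	static int __init my_init(void)
	{
		return mhi_driver_register(&my_driver);
	}
	module_init(my_init);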

drivers/bus/Kconfig

@@ -181,6 +181,25 @@ config DA8XX_MSTPRI
configuration. Allows to adjust the priorities of all master
peripherals.
config MHI_BUS
tristate "Modem Host Interface"
help
MHI Host Interface is a communication protocol to be used by the host
to control and communicate with the modem over a high speed peripheral bus.
Enabling this module will allow the host to communicate with external
devices that support the MHI protocol.
config MHI_DEBUG
bool "MHI debug support"
depends on MHI_BUS
help
Say yes here to enable debugging support in the MHI transport
and individual MHI client drivers. This option will impact
throughput as individual MHI packets and state transitions
will be logged.
source "drivers/bus/fsl-mc/Kconfig" source "drivers/bus/fsl-mc/Kconfig"
source "drivers/bus/mhi/controllers/Kconfig"
source "drivers/bus/mhi/devices/Kconfig"
endmenu

drivers/bus/Makefile

@@ -32,3 +32,4 @@ obj-$(CONFIG_UNIPHIER_SYSTEM_BUS) += uniphier-system-bus.o
obj-$(CONFIG_VEXPRESS_CONFIG) += vexpress-config.o
obj-$(CONFIG_DA8XX_MSTPRI) += da8xx-mstpri.o
obj-$(CONFIG_MHI_BUS) += mhi/

drivers/bus/mhi/Makefile (new file)

@@ -0,0 +1,9 @@
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for the MHI stack
#
# core layer
obj-y += core/
obj-y += controllers/
obj-y += devices/

drivers/bus/mhi/controllers/Kconfig (new file)

@@ -0,0 +1,15 @@
# SPDX-License-Identifier: GPL-2.0
menu "MHI controllers"
config MHI_QCOM
tristate "MHI QCOM"
depends on MHI_BUS
help
If you say yes to this option, MHI bus support for QCOM modem chipsets
will be enabled. QCOM PCIe based modems use MHI as the communication
protocol. The MHI control driver is the bus master for such modems. As the
bus master driver, it oversees power management operations such as
suspend, resume, and powering the device on and off.
endmenu

drivers/bus/mhi/controllers/Makefile (new file)

@@ -0,0 +1,3 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_MHI_QCOM) += mhi_qcom.o mhi_arch_qcom.o

drivers/bus/mhi/controllers/mhi_arch_qcom.c (new file)

@@ -0,0 +1,555 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2018, The Linux Foundation. All rights reserved.*/
#include <asm/dma-iommu.h>
#include <linux/async.h>
#include <linux/debugfs.h>
#include <linux/device.h>
#include <linux/dma-direction.h>
#include <linux/esoc_client.h>
#include <linux/list.h>
#include <linux/of.h>
#include <linux/memblock.h>
#include <linux/module.h>
#include <linux/msm-bus.h>
#include <linux/msm_pcie.h>
#include <linux/pm_runtime.h>
#include <linux/slab.h>
#include <linux/mhi.h>
#include "mhi_qcom.h"
struct arch_info {
struct mhi_dev *mhi_dev;
struct esoc_desc *esoc_client;
struct esoc_client_hook esoc_ops;
struct msm_bus_scale_pdata *msm_bus_pdata;
u32 bus_client;
struct msm_pcie_register_event pcie_reg_event;
struct pci_saved_state *pcie_state;
struct pci_saved_state *ref_pcie_state;
struct dma_iommu_mapping *mapping;
};
struct mhi_bl_info {
struct mhi_device *mhi_device;
async_cookie_t cookie;
void *ipc_log;
};
/* ipc log markings */
#define DLOG "Dev->Host: "
#define HLOG "Host: "
#ifdef CONFIG_MHI_DEBUG
#define MHI_IPC_LOG_PAGES (100)
enum MHI_DEBUG_LEVEL mhi_ipc_log_lvl = MHI_MSG_LVL_VERBOSE;
#else
#define MHI_IPC_LOG_PAGES (10)
enum MHI_DEBUG_LEVEL mhi_ipc_log_lvl = MHI_MSG_LVL_ERROR;
#endif
extern void pm_runtime_init(struct device *dev);
static int mhi_arch_set_bus_request(struct mhi_controller *mhi_cntrl, int index)
{
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct arch_info *arch_info = mhi_dev->arch_info;
MHI_LOG("Setting bus request to index %d\n", index);
return msm_bus_scale_client_update_request(arch_info->bus_client,
index);
}
static void mhi_arch_pci_link_state_cb(struct msm_pcie_notify *notify)
{
struct mhi_controller *mhi_cntrl = notify->data;
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct pci_dev *pci_dev = mhi_dev->pci_dev;
if (notify->event == MSM_PCIE_EVENT_WAKEUP) {
MHI_LOG("Received MSM_PCIE_EVENT_WAKE signal\n");
/* bring link out of d3cold */
if (mhi_dev->powered_on) {
pm_runtime_get(&pci_dev->dev);
pm_runtime_put_noidle(&pci_dev->dev);
}
}
}
static int mhi_arch_esoc_ops_power_on(void *priv, bool mdm_state)
{
struct mhi_controller *mhi_cntrl = priv;
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct pci_dev *pci_dev = mhi_dev->pci_dev;
struct arch_info *arch_info = mhi_dev->arch_info;
int ret;
mutex_lock(&mhi_cntrl->pm_mutex);
if (mhi_dev->powered_on) {
MHI_LOG("MHI still in active state\n");
mutex_unlock(&mhi_cntrl->pm_mutex);
return 0;
}
MHI_LOG("Enter\n");
/* reset rpm state */
pm_runtime_init(&pci_dev->dev);
pm_runtime_enable(&pci_dev->dev);
mutex_unlock(&mhi_cntrl->pm_mutex);
pm_runtime_forbid(&pci_dev->dev);
ret = pm_runtime_get_sync(&pci_dev->dev);
if (ret < 0) {
MHI_ERR("Error with rpm resume, ret:%d\n", ret);
return ret;
}
/* re-start the link & recover default cfg states */
ret = msm_pcie_pm_control(MSM_PCIE_RESUME, pci_dev->bus->number,
pci_dev, NULL, 0);
if (ret) {
MHI_ERR("Failed to resume pcie bus ret %d\n", ret);
return ret;
}
pci_load_saved_state(pci_dev, arch_info->ref_pcie_state);
return mhi_pci_probe(pci_dev, NULL);
}
void mhi_arch_esoc_ops_power_off(void *priv, bool mdm_state)
{
struct mhi_controller *mhi_cntrl = priv;
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
MHI_LOG("Enter: mdm_crashed:%d\n", mdm_state);
if (!mhi_dev->powered_on) {
MHI_LOG("Not in active state\n");
return;
}
MHI_LOG("Triggering shutdown process\n");
mhi_power_down(mhi_cntrl, !mdm_state);
/* turn the link off */
mhi_deinit_pci_dev(mhi_cntrl);
mhi_arch_link_off(mhi_cntrl, false);
mhi_arch_iommu_deinit(mhi_cntrl);
mhi_arch_pcie_deinit(mhi_cntrl);
mhi_dev->powered_on = false;
}
static void mhi_bl_dl_cb(struct mhi_device *mhi_dev,
struct mhi_result *mhi_result)
{
struct mhi_bl_info *mhi_bl_info = mhi_device_get_devdata(mhi_dev);
char *buf = mhi_result->buf_addr;
/* force a null at last character */
buf[mhi_result->bytes_xferd - 1] = 0;
ipc_log_string(mhi_bl_info->ipc_log, "%s %s", DLOG, buf);
}
static void mhi_bl_dummy_cb(struct mhi_device *mhi_dev,
struct mhi_result *mhi_result)
{
}
static void mhi_bl_remove(struct mhi_device *mhi_dev)
{
struct mhi_bl_info *mhi_bl_info = mhi_device_get_devdata(mhi_dev);
ipc_log_string(mhi_bl_info->ipc_log, HLOG "Received Remove notif.\n");
/* wait for boot monitor to exit */
async_synchronize_cookie(mhi_bl_info->cookie + 1);
}
static void mhi_bl_boot_monitor(void *data, async_cookie_t cookie)
{
struct mhi_bl_info *mhi_bl_info = data;
struct mhi_device *mhi_device = mhi_bl_info->mhi_device;
struct mhi_controller *mhi_cntrl = mhi_device->mhi_cntrl;
/* 15 sec timeout for booting device */
const u32 timeout = msecs_to_jiffies(15000);
/* wait for device to enter boot stage */
wait_event_timeout(mhi_cntrl->state_event, mhi_cntrl->ee == MHI_EE_AMSS
|| mhi_cntrl->ee == MHI_EE_DISABLE_TRANSITION,
timeout);
if (mhi_cntrl->ee == MHI_EE_AMSS) {
ipc_log_string(mhi_bl_info->ipc_log, HLOG
"Device successfully booted to mission mode\n");
mhi_unprepare_from_transfer(mhi_device);
} else {
ipc_log_string(mhi_bl_info->ipc_log, HLOG
"Device failed to boot to mission mode, ee = %s\n",
TO_MHI_EXEC_STR(mhi_cntrl->ee));
}
}
static int mhi_bl_probe(struct mhi_device *mhi_dev,
const struct mhi_device_id *id)
{
char node_name[32];
struct mhi_bl_info *mhi_bl_info;
mhi_bl_info = devm_kzalloc(&mhi_dev->dev, sizeof(*mhi_bl_info),
GFP_KERNEL);
if (!mhi_bl_info)
return -ENOMEM;
snprintf(node_name, sizeof(node_name), "mhi_bl_%04x_%02u.%02u.%02u",
mhi_dev->dev_id, mhi_dev->domain, mhi_dev->bus, mhi_dev->slot);
mhi_bl_info->ipc_log = ipc_log_context_create(MHI_IPC_LOG_PAGES,
node_name, 0);
if (!mhi_bl_info->ipc_log)
return -EINVAL;
mhi_bl_info->mhi_device = mhi_dev;
mhi_device_set_devdata(mhi_dev, mhi_bl_info);
ipc_log_string(mhi_bl_info->ipc_log, HLOG
"Entered SBL, Session ID:0x%x\n",
mhi_dev->mhi_cntrl->session_id);
/* start a thread to monitor entering mission mode */
mhi_bl_info->cookie = async_schedule(mhi_bl_boot_monitor, mhi_bl_info);
return 0;
}
static const struct mhi_device_id mhi_bl_match_table[] = {
{ .chan = "BL" },
{},
};
static struct mhi_driver mhi_bl_driver = {
.id_table = mhi_bl_match_table,
.remove = mhi_bl_remove,
.probe = mhi_bl_probe,
.ul_xfer_cb = mhi_bl_dummy_cb,
.dl_xfer_cb = mhi_bl_dl_cb,
.driver = {
.name = "MHI_BL",
.owner = THIS_MODULE,
},
};
int mhi_arch_pcie_init(struct mhi_controller *mhi_cntrl)
{
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct arch_info *arch_info = mhi_dev->arch_info;
char node[32];
int ret;
if (!arch_info) {
struct msm_pcie_register_event *reg_event;
arch_info = devm_kzalloc(&mhi_dev->pci_dev->dev,
sizeof(*arch_info), GFP_KERNEL);
if (!arch_info)
return -ENOMEM;
mhi_dev->arch_info = arch_info;
snprintf(node, sizeof(node), "mhi_%04x_%02u.%02u.%02u",
mhi_cntrl->dev_id, mhi_cntrl->domain, mhi_cntrl->bus,
mhi_cntrl->slot);
mhi_cntrl->log_buf = ipc_log_context_create(MHI_IPC_LOG_PAGES,
node, 0);
mhi_cntrl->log_lvl = mhi_ipc_log_lvl;
/* register bus scale */
arch_info->msm_bus_pdata = msm_bus_cl_get_pdata_from_dev(
&mhi_dev->pci_dev->dev);
if (!arch_info->msm_bus_pdata)
return -EINVAL;
arch_info->bus_client = msm_bus_scale_register_client(
arch_info->msm_bus_pdata);
if (!arch_info->bus_client)
return -EINVAL;
/* register with pcie rc for WAKE# events */
reg_event = &arch_info->pcie_reg_event;
reg_event->events = MSM_PCIE_EVENT_WAKEUP;
reg_event->user = mhi_dev->pci_dev;
reg_event->callback = mhi_arch_pci_link_state_cb;
reg_event->notify.data = mhi_cntrl;
ret = msm_pcie_register_event(reg_event);
if (ret)
MHI_LOG("Failed to reg. for link up notification\n");
arch_info->esoc_client = devm_register_esoc_client(
&mhi_dev->pci_dev->dev, "mdm");
if (IS_ERR_OR_NULL(arch_info->esoc_client)) {
MHI_ERR("Failed to register esoc client\n");
} else {
/* register for power on/off hooks */
struct esoc_client_hook *esoc_ops =
&arch_info->esoc_ops;
esoc_ops->priv = mhi_cntrl;
esoc_ops->prio = ESOC_MHI_HOOK;
esoc_ops->esoc_link_power_on =
mhi_arch_esoc_ops_power_on;
esoc_ops->esoc_link_power_off =
mhi_arch_esoc_ops_power_off;
ret = esoc_register_client_hook(arch_info->esoc_client,
esoc_ops);
if (ret)
MHI_ERR("Failed to register esoc ops\n");
}
/* save reference state for pcie config space */
arch_info->ref_pcie_state = pci_store_saved_state(
mhi_dev->pci_dev);
mhi_driver_register(&mhi_bl_driver);
}
return mhi_arch_set_bus_request(mhi_cntrl, 1);
}
void mhi_arch_pcie_deinit(struct mhi_controller *mhi_cntrl)
{
mhi_arch_set_bus_request(mhi_cntrl, 0);
}
static struct dma_iommu_mapping *mhi_arch_create_iommu_mapping(
struct mhi_controller *mhi_cntrl)
{
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
dma_addr_t base;
size_t size;
/*
* If S1_BYPASS is enabled then the iommu space is not used; however, the
* framework still requires clients to create a mapping space before
* attaching. So set it to the smallest size required by the iommu framework.
*/
if (mhi_dev->smmu_cfg & MHI_SMMU_S1_BYPASS) {
base = 0;
size = PAGE_SIZE;
} else {
base = mhi_dev->iova_start;
size = (mhi_dev->iova_stop - base) + 1;
}
MHI_LOG("Create iommu mapping of base:%pad size:%zu\n",
&base, size);
return arm_iommu_create_mapping(&pci_bus_type, base, size);
}
static int mhi_arch_dma_mask(struct mhi_controller *mhi_cntrl)
{
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
u32 smmu_cfg = mhi_dev->smmu_cfg;
int mask = 0;
if (!smmu_cfg) {
mask = 64;
} else {
unsigned long size = mhi_dev->iova_stop + 1;
/* for S1 bypass, the iova is not used; set the mask to max */
mask = (smmu_cfg & MHI_SMMU_S1_BYPASS) ?
64 : find_last_bit(&size, 64);
}
return dma_set_mask_and_coherent(mhi_cntrl->dev, DMA_BIT_MASK(mask));
}
int mhi_arch_iommu_init(struct mhi_controller *mhi_cntrl)
{
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct arch_info *arch_info = mhi_dev->arch_info;
u32 smmu_config = mhi_dev->smmu_cfg;
struct dma_iommu_mapping *mapping = NULL;
int ret;
if (smmu_config) {
mapping = mhi_arch_create_iommu_mapping(mhi_cntrl);
if (IS_ERR(mapping)) {
MHI_ERR("Failed to create iommu mapping\n");
return PTR_ERR(mapping);
}
}
if (smmu_config & MHI_SMMU_S1_BYPASS) {
int s1_bypass = 1;
ret = iommu_domain_set_attr(mapping->domain,
DOMAIN_ATTR_S1_BYPASS, &s1_bypass);
if (ret) {
MHI_ERR("Failed to set attribute S1_BYPASS\n");
goto release_mapping;
}
}
if (smmu_config & MHI_SMMU_FAST) {
int fast_map = 1;
ret = iommu_domain_set_attr(mapping->domain, DOMAIN_ATTR_FAST,
&fast_map);
if (ret) {
MHI_ERR("Failed to set attribute FAST_MAP\n");
goto release_mapping;
}
}
if (smmu_config & MHI_SMMU_ATOMIC) {
int atomic = 1;
ret = iommu_domain_set_attr(mapping->domain, DOMAIN_ATTR_ATOMIC,
&atomic);
if (ret) {
MHI_ERR("Failed to set attribute ATOMIC\n");
goto release_mapping;
}
}
if (smmu_config & MHI_SMMU_FORCE_COHERENT) {
int force_coherent = 1;
ret = iommu_domain_set_attr(mapping->domain,
DOMAIN_ATTR_PAGE_TABLE_FORCE_COHERENT,
&force_coherent);
if (ret) {
MHI_ERR("Failed to set attribute FORCE_COHERENT\n");
goto release_mapping;
}
}
if (smmu_config) {
ret = arm_iommu_attach_device(&mhi_dev->pci_dev->dev, mapping);
if (ret) {
MHI_ERR("Error attach device, ret:%d\n", ret);
goto release_mapping;
}
arch_info->mapping = mapping;
}
mhi_cntrl->dev = &mhi_dev->pci_dev->dev;
ret = mhi_arch_dma_mask(mhi_cntrl);
if (ret) {
MHI_ERR("Error setting dma mask, ret:%d\n", ret);
goto release_device;
}
return 0;
release_device:
arm_iommu_detach_device(mhi_cntrl->dev);
release_mapping:
arm_iommu_release_mapping(mapping);
return ret;
}
void mhi_arch_iommu_deinit(struct mhi_controller *mhi_cntrl)
{
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct arch_info *arch_info = mhi_dev->arch_info;
struct dma_iommu_mapping *mapping = arch_info->mapping;
if (mapping) {
arm_iommu_detach_device(mhi_cntrl->dev);
arm_iommu_release_mapping(mapping);
}
arch_info->mapping = NULL;
mhi_cntrl->dev = NULL;
}
int mhi_arch_link_off(struct mhi_controller *mhi_cntrl, bool graceful)
{
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct arch_info *arch_info = mhi_dev->arch_info;
struct pci_dev *pci_dev = mhi_dev->pci_dev;
int ret;
MHI_LOG("Entered\n");
if (graceful) {
ret = pci_save_state(mhi_dev->pci_dev);
if (ret) {
MHI_ERR("Failed with pci_save_state, ret:%d\n", ret);
return ret;
}
arch_info->pcie_state = pci_store_saved_state(pci_dev);
pci_disable_device(pci_dev);
ret = pci_set_power_state(pci_dev, PCI_D3hot);
if (ret) {
MHI_ERR("Failed to set D3hot, ret:%d\n", ret);
return ret;
}
}
/* release the resources */
msm_pcie_pm_control(MSM_PCIE_SUSPEND, mhi_cntrl->bus, pci_dev, NULL, 0);
mhi_arch_set_bus_request(mhi_cntrl, 0);
MHI_LOG("Exited\n");
return 0;
}
int mhi_arch_link_on(struct mhi_controller *mhi_cntrl)
{
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct arch_info *arch_info = mhi_dev->arch_info;
struct pci_dev *pci_dev = mhi_dev->pci_dev;
int ret;
MHI_LOG("Entered\n");
/* request resources and establish link training */
ret = mhi_arch_set_bus_request(mhi_cntrl, 1);
if (ret)
MHI_LOG("Could not set bus frequency, ret:%d\n", ret);
ret = msm_pcie_pm_control(MSM_PCIE_RESUME, mhi_cntrl->bus, pci_dev,
NULL, 0);
if (ret) {
MHI_ERR("Link training failed, ret:%d\n", ret);
return ret;
}
ret = pci_set_power_state(pci_dev, PCI_D0);
if (ret) {
MHI_ERR("Failed to set PCI_D0 state, ret:%d\n", ret);
return ret;
}
ret = pci_enable_device(pci_dev);
if (ret) {
MHI_ERR("Failed to enable device, ret:%d\n", ret);
return ret;
}
ret = pci_load_and_free_saved_state(pci_dev, &arch_info->pcie_state);
if (ret)
MHI_LOG("Failed to load saved cfg state\n");
pci_restore_state(pci_dev);
pci_set_master(pci_dev);
MHI_LOG("Exited\n");
return 0;
}

drivers/bus/mhi/controllers/mhi_qcom.c (new file)

@@ -0,0 +1,598 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2018, The Linux Foundation. All rights reserved.*/
#include <linux/debugfs.h>
#include <linux/device.h>
#include <linux/dma-direction.h>
#include <linux/list.h>
#include <linux/of.h>
#include <linux/memblock.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/pm_runtime.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/mhi.h>
#include "mhi_qcom.h"
struct firmware_info {
unsigned int dev_id;
const char *fw_image;
const char *edl_image;
};
static const struct firmware_info firmware_table[] = {
{.dev_id = 0x306, .fw_image = "sbl1.mbn"},
{.dev_id = 0x305, .fw_image = "sdx50m/sbl1.mbn"},
{.dev_id = 0x304, .fw_image = "sbl.mbn", .edl_image = "edl.mbn"},
/* default, set to debug.mbn */
{.fw_image = "debug.mbn"},
};
static int debug_mode;
module_param_named(debug_mode, debug_mode, int, 0644);
void mhi_deinit_pci_dev(struct mhi_controller *mhi_cntrl)
{
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct pci_dev *pci_dev = mhi_dev->pci_dev;
pci_free_irq_vectors(pci_dev);
kfree(mhi_cntrl->irq);
mhi_cntrl->irq = NULL;
iounmap(mhi_cntrl->regs);
mhi_cntrl->regs = NULL;
pci_clear_master(pci_dev);
pci_release_region(pci_dev, mhi_dev->resn);
pci_disable_device(pci_dev);
}
static int mhi_init_pci_dev(struct mhi_controller *mhi_cntrl)
{
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
struct pci_dev *pci_dev = mhi_dev->pci_dev;
int ret;
resource_size_t start, len;
int i;
mhi_dev->resn = MHI_PCI_BAR_NUM;
ret = pci_assign_resource(pci_dev, mhi_dev->resn);
if (ret) {
MHI_ERR("Error assign pci resources, ret:%d\n", ret);
return ret;
}
ret = pci_enable_device(pci_dev);
if (ret) {
MHI_ERR("Error enabling device, ret:%d\n", ret);
goto error_enable_device;
}
ret = pci_request_region(pci_dev, mhi_dev->resn, "mhi");
if (ret) {
MHI_ERR("Error pci_request_region, ret:%d\n", ret);
goto error_request_region;
}
pci_set_master(pci_dev);
start = pci_resource_start(pci_dev, mhi_dev->resn);
len = pci_resource_len(pci_dev, mhi_dev->resn);
mhi_cntrl->regs = ioremap_nocache(start, len);
if (!mhi_cntrl->regs) {
MHI_ERR("Error ioremap region\n");
goto error_ioremap;
}
ret = pci_alloc_irq_vectors(pci_dev, mhi_cntrl->msi_required,
mhi_cntrl->msi_required, PCI_IRQ_MSI);
if (IS_ERR_VALUE((ulong)ret) || ret < mhi_cntrl->msi_required) {
MHI_ERR("Failed to enable MSI, ret:%d\n", ret);
goto error_req_msi;
}
mhi_cntrl->msi_allocated = ret;
mhi_cntrl->irq = kmalloc_array(mhi_cntrl->msi_allocated,
sizeof(*mhi_cntrl->irq), GFP_KERNEL);
if (!mhi_cntrl->irq) {
ret = -ENOMEM;
goto error_alloc_msi_vec;
}
for (i = 0; i < mhi_cntrl->msi_allocated; i++) {
mhi_cntrl->irq[i] = pci_irq_vector(pci_dev, i);
if (mhi_cntrl->irq[i] < 0) {
ret = mhi_cntrl->irq[i];
goto error_get_irq_vec;
}
}
dev_set_drvdata(&pci_dev->dev, mhi_cntrl);
/* configure runtime pm */
pm_runtime_set_autosuspend_delay(&pci_dev->dev, MHI_RPM_SUSPEND_TMR_MS);
pm_runtime_use_autosuspend(&pci_dev->dev);
pm_suspend_ignore_children(&pci_dev->dev, true);
/*
* pci framework will increment usage count (twice) before
* calling local device driver probe function.
* 1st pci.c pci_pm_init() calls pm_runtime_forbid
* 2nd pci-driver.c local_pci_probe calls pm_runtime_get_sync
* The framework expects the pci device driver to call
* pm_runtime_put_noidle to decrement the usage count after a
* successful probe and to call pm_runtime_allow to enable
* runtime suspend.
*/
pm_runtime_mark_last_busy(&pci_dev->dev);
pm_runtime_put_noidle(&pci_dev->dev);
return 0;
error_get_irq_vec:
kfree(mhi_cntrl->irq);
mhi_cntrl->irq = NULL;
error_alloc_msi_vec:
pci_free_irq_vectors(pci_dev);
error_req_msi:
iounmap(mhi_cntrl->regs);
error_ioremap:
pci_clear_master(pci_dev);
error_request_region:
pci_disable_device(pci_dev);
error_enable_device:
pci_release_region(pci_dev, mhi_dev->resn);
return ret;
}
static int mhi_runtime_suspend(struct device *dev)
{
int ret = 0;
struct mhi_controller *mhi_cntrl = dev_get_drvdata(dev);
MHI_LOG("Enter\n");
mutex_lock(&mhi_cntrl->pm_mutex);
ret = mhi_pm_suspend(mhi_cntrl);
if (ret) {
MHI_LOG("Abort due to ret:%d\n", ret);
goto exit_runtime_suspend;
}
ret = mhi_arch_link_off(mhi_cntrl, true);
if (ret)
MHI_ERR("Failed to Turn off link ret:%d\n", ret);
exit_runtime_suspend:
mutex_unlock(&mhi_cntrl->pm_mutex);
MHI_LOG("Exited with ret:%d\n", ret);
return ret;
}
static int mhi_runtime_idle(struct device *dev)
{
struct mhi_controller *mhi_cntrl = dev_get_drvdata(dev);
MHI_LOG("Entered returning -EBUSY\n");
/*
* The RPM framework always calls rpm_idle during runtime resume to
* see if the device is ready to suspend: if dev.power usage_count
* is 0, the framework calls the rpm_idle cb; if the cb returns 0
* or is not defined, the framework assumes the device driver is
* ready to suspend and schedules a runtime suspend.
* In MHI power management, the MHI host shall go to runtime suspend
* only after entering MHI state M2, even if the usage count is 0.
* Return -EBUSY to disable automatic suspend.
*/
return -EBUSY;
}
static int mhi_runtime_resume(struct device *dev)
{
int ret = 0;
struct mhi_controller *mhi_cntrl = dev_get_drvdata(dev);
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
MHI_LOG("Enter\n");
mutex_lock(&mhi_cntrl->pm_mutex);
if (!mhi_dev->powered_on) {
MHI_LOG("Not fully powered, return success\n");
mutex_unlock(&mhi_cntrl->pm_mutex);
return 0;
}
/* turn on link */
ret = mhi_arch_link_on(mhi_cntrl);
if (ret)
goto rpm_resume_exit;
/* enter M0 state */
ret = mhi_pm_resume(mhi_cntrl);
rpm_resume_exit:
mutex_unlock(&mhi_cntrl->pm_mutex);
MHI_LOG("Exited with :%d\n", ret);
return ret;
}
static int mhi_system_resume(struct device *dev)
{
int ret = 0;
struct mhi_controller *mhi_cntrl = dev_get_drvdata(dev);
ret = mhi_runtime_resume(dev);
if (ret) {
MHI_ERR("Failed to resume link\n");
} else {
pm_runtime_set_active(dev);
pm_runtime_enable(dev);
}
return ret;
}
int mhi_system_suspend(struct device *dev)
{
struct mhi_controller *mhi_cntrl = dev_get_drvdata(dev);
int ret;
MHI_LOG("Entered\n");
/* if rpm status still active then force suspend */
if (!pm_runtime_status_suspended(dev)) {
ret = mhi_runtime_suspend(dev);
if (ret) {
MHI_LOG("suspend failed ret:%d\n", ret);
return ret;
}
}
pm_runtime_set_suspended(dev);
pm_runtime_disable(dev);
MHI_LOG("Exit\n");
return 0;
}
/* checks if link is down */
static int mhi_link_status(struct mhi_controller *mhi_cntrl, void *priv)
{
struct mhi_dev *mhi_dev = priv;
u16 dev_id;
int ret;
/* try reading the device id; if it doesn't match, the link is down */
ret = pci_read_config_word(mhi_dev->pci_dev, PCI_DEVICE_ID, &dev_id);
return (ret || dev_id != mhi_cntrl->dev_id) ? -EIO : 0;
}
static int mhi_runtime_get(struct mhi_controller *mhi_cntrl, void *priv)
{
struct mhi_dev *mhi_dev = priv;
struct device *dev = &mhi_dev->pci_dev->dev;
return pm_runtime_get(dev);
}
static void mhi_runtime_put(struct mhi_controller *mhi_cntrl, void *priv)
{
struct mhi_dev *mhi_dev = priv;
struct device *dev = &mhi_dev->pci_dev->dev;
pm_runtime_put_noidle(dev);
}
static void mhi_status_cb(struct mhi_controller *mhi_cntrl,
void *priv,
enum MHI_CB reason)
{
struct mhi_dev *mhi_dev = priv;
struct device *dev = &mhi_dev->pci_dev->dev;
if (reason == MHI_CB_IDLE) {
MHI_LOG("Schedule runtime suspend 1\n");
pm_runtime_mark_last_busy(dev);
pm_request_autosuspend(dev);
}
}
int mhi_debugfs_trigger_m0(void *data, u64 val)
{
struct mhi_controller *mhi_cntrl = data;
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
MHI_LOG("Trigger M3 Exit\n");
pm_runtime_get(&mhi_dev->pci_dev->dev);
pm_runtime_put(&mhi_dev->pci_dev->dev);
return 0;
}
int mhi_debugfs_trigger_m3(void *data, u64 val)
{
struct mhi_controller *mhi_cntrl = data;
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
MHI_LOG("Trigger M3 Entry\n");
pm_runtime_mark_last_busy(&mhi_dev->pci_dev->dev);
pm_request_autosuspend(&mhi_dev->pci_dev->dev);
return 0;
}
DEFINE_DEBUGFS_ATTRIBUTE(debugfs_trigger_m0_fops, NULL,
mhi_debugfs_trigger_m0, "%llu\n");
DEFINE_DEBUGFS_ATTRIBUTE(debugfs_trigger_m3_fops, NULL,
mhi_debugfs_trigger_m3, "%llu\n");
static ssize_t timeout_ms_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct mhi_device *mhi_dev = to_mhi_device(dev);
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
/* buffer provided by sysfs has a minimum size of PAGE_SIZE */
return snprintf(buf, PAGE_SIZE, "%u\n", mhi_cntrl->timeout_ms);
}
static ssize_t timeout_ms_store(struct device *dev,
struct device_attribute *attr,
const char *buf,
size_t count)
{
struct mhi_device *mhi_dev = to_mhi_device(dev);
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
u32 timeout_ms;
if (kstrtou32(buf, 0, &timeout_ms) < 0)
return -EINVAL;
mhi_cntrl->timeout_ms = timeout_ms;
return count;
}
static DEVICE_ATTR_RW(timeout_ms);
static ssize_t power_up_store(struct device *dev,
struct device_attribute *attr,
const char *buf,
size_t count)
{
int ret;
struct mhi_device *mhi_dev = to_mhi_device(dev);
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
ret = mhi_async_power_up(mhi_cntrl);
if (ret)
return ret;
return count;
}
static DEVICE_ATTR_WO(power_up);
static struct attribute *mhi_qcom_attrs[] = {
&dev_attr_timeout_ms.attr,
&dev_attr_power_up.attr,
NULL
};
static const struct attribute_group mhi_qcom_group = {
.attrs = mhi_qcom_attrs,
};
static struct mhi_controller *mhi_register_controller(struct pci_dev *pci_dev)
{
struct mhi_controller *mhi_cntrl;
struct mhi_dev *mhi_dev;
struct device_node *of_node = pci_dev->dev.of_node;
const struct firmware_info *firmware_info;
bool use_bb;
u64 addr_win[2];
int ret, i;
if (!of_node)
return ERR_PTR(-ENODEV);
mhi_cntrl = mhi_alloc_controller(sizeof(*mhi_dev));
if (!mhi_cntrl)
return ERR_PTR(-ENOMEM);
mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
mhi_cntrl->domain = pci_domain_nr(pci_dev->bus);
mhi_cntrl->dev_id = pci_dev->device;
mhi_cntrl->bus = pci_dev->bus->number;
mhi_cntrl->slot = PCI_SLOT(pci_dev->devfn);
ret = of_property_read_u32(of_node, "qcom,smmu-cfg",
&mhi_dev->smmu_cfg);
if (ret)
goto error_register;
use_bb = of_property_read_bool(of_node, "mhi,use-bb");
/*
* if s1 translation is enabled or a bounce buffer is in use, pull the
* iova address from DT
*/
if (use_bb || (mhi_dev->smmu_cfg & MHI_SMMU_ATTACH &&
!(mhi_dev->smmu_cfg & MHI_SMMU_S1_BYPASS))) {
ret = of_property_count_elems_of_size(of_node, "qcom,addr-win",
sizeof(addr_win));
if (ret != 1)
goto error_register;
ret = of_property_read_u64_array(of_node, "qcom,addr-win",
addr_win, 2);
if (ret)
goto error_register;
} else {
addr_win[0] = memblock_start_of_DRAM();
addr_win[1] = memblock_end_of_DRAM();
}
mhi_dev->iova_start = addr_win[0];
mhi_dev->iova_stop = addr_win[1];
/*
* If S1 is enabled, set MHI_CTRL start address to 0 so we can use low
* level mapping api to map buffers outside of smmu domain
*/
if (mhi_dev->smmu_cfg & MHI_SMMU_ATTACH &&
!(mhi_dev->smmu_cfg & MHI_SMMU_S1_BYPASS))
mhi_cntrl->iova_start = 0;
else
mhi_cntrl->iova_start = addr_win[0];
mhi_cntrl->iova_stop = mhi_dev->iova_stop;
mhi_cntrl->of_node = of_node;
mhi_dev->pci_dev = pci_dev;
/* setup power management apis */
mhi_cntrl->status_cb = mhi_status_cb;
mhi_cntrl->runtime_get = mhi_runtime_get;
mhi_cntrl->runtime_put = mhi_runtime_put;
mhi_cntrl->link_status = mhi_link_status;
ret = of_register_mhi_controller(mhi_cntrl);
if (ret)
goto error_register;
for (i = 0; i < ARRAY_SIZE(firmware_table); i++) {
firmware_info = firmware_table + i;
/* debug mode always uses the default entry */
if (!debug_mode && mhi_cntrl->dev_id == firmware_info->dev_id)
break;
}
mhi_cntrl->fw_image = firmware_info->fw_image;
mhi_cntrl->edl_image = firmware_info->edl_image;
sysfs_create_group(&mhi_cntrl->mhi_dev->dev.kobj, &mhi_qcom_group);
return mhi_cntrl;
error_register:
mhi_free_controller(mhi_cntrl);
return ERR_PTR(-EINVAL);
}
int mhi_pci_probe(struct pci_dev *pci_dev,
const struct pci_device_id *device_id)
{
struct mhi_controller *mhi_cntrl;
u32 domain = pci_domain_nr(pci_dev->bus);
u32 bus = pci_dev->bus->number;
u32 dev_id = pci_dev->device;
u32 slot = PCI_SLOT(pci_dev->devfn);
struct mhi_dev *mhi_dev;
int ret;
/* see if we already registered */
mhi_cntrl = mhi_bdf_to_controller(domain, bus, slot, dev_id);
if (!mhi_cntrl)
mhi_cntrl = mhi_register_controller(pci_dev);
if (IS_ERR(mhi_cntrl))
return PTR_ERR(mhi_cntrl);
mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
mhi_dev->powered_on = true;
ret = mhi_arch_pcie_init(mhi_cntrl);
if (ret)
return ret;
ret = mhi_arch_iommu_init(mhi_cntrl);
if (ret)
goto error_iommu_init;
ret = mhi_init_pci_dev(mhi_cntrl);
if (ret)
goto error_init_pci;
/* start power up sequence */
if (!debug_mode) {
ret = mhi_async_power_up(mhi_cntrl);
if (ret)
goto error_power_up;
}
pm_runtime_mark_last_busy(&pci_dev->dev);
pm_runtime_allow(&pci_dev->dev);
if (mhi_cntrl->dentry) {
debugfs_create_file_unsafe("m0", 0444, mhi_cntrl->dentry,
mhi_cntrl, &debugfs_trigger_m0_fops);
debugfs_create_file_unsafe("m3", 0444, mhi_cntrl->dentry,
mhi_cntrl, &debugfs_trigger_m3_fops);
}
MHI_LOG("Return successful\n");
return 0;
error_power_up:
mhi_deinit_pci_dev(mhi_cntrl);
error_init_pci:
mhi_arch_iommu_deinit(mhi_cntrl);
error_iommu_init:
mhi_arch_pcie_deinit(mhi_cntrl);
return ret;
}
static const struct dev_pm_ops pm_ops = {
SET_RUNTIME_PM_OPS(mhi_runtime_suspend,
mhi_runtime_resume,
mhi_runtime_idle)
SET_SYSTEM_SLEEP_PM_OPS(mhi_system_suspend, mhi_system_resume)
};
static struct pci_device_id mhi_pcie_device_id[] = {
{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0300)},
{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0301)},
{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0302)},
{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0303)},
{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0304)},
{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0305)},
{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0306)},
{PCI_DEVICE(MHI_PCIE_VENDOR_ID, MHI_PCIE_DEBUG_ID)},
{0},
};
static struct pci_driver mhi_pcie_driver = {
.name = "mhi",
.id_table = mhi_pcie_device_id,
.probe = mhi_pci_probe,
.driver = {
.pm = &pm_ops
}
};
module_pci_driver(mhi_pcie_driver);
MODULE_LICENSE("GPL v2");
MODULE_ALIAS("MHI_CORE");
MODULE_DESCRIPTION("MHI Host Driver");

drivers/bus/mhi/controllers/mhi_qcom.h (new file)

@@ -0,0 +1,82 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) 2018, The Linux Foundation. All rights reserved.*/
#ifndef _MHI_QCOM_
#define _MHI_QCOM_
/* iova cfg bitmask */
#define MHI_SMMU_ATTACH BIT(0)
#define MHI_SMMU_S1_BYPASS BIT(1)
#define MHI_SMMU_FAST BIT(2)
#define MHI_SMMU_ATOMIC BIT(3)
#define MHI_SMMU_FORCE_COHERENT BIT(4)
#define MHI_PCIE_VENDOR_ID (0x17cb)
#define MHI_PCIE_DEBUG_ID (0xffff)
#define MHI_RPM_SUSPEND_TMR_MS (1000)
#define MHI_PCI_BAR_NUM (0)
extern const char * const mhi_ee_str[MHI_EE_MAX];
#define TO_MHI_EXEC_STR(ee) (ee >= MHI_EE_MAX ? "INVALID_EE" : mhi_ee_str[ee])
struct mhi_dev {
struct pci_dev *pci_dev;
u32 smmu_cfg;
int resn;
void *arch_info;
bool powered_on;
dma_addr_t iova_start;
dma_addr_t iova_stop;
};
void mhi_deinit_pci_dev(struct mhi_controller *mhi_cntrl);
int mhi_pci_probe(struct pci_dev *pci_dev,
const struct pci_device_id *device_id);
#ifdef CONFIG_ARCH_QCOM
int mhi_arch_pcie_init(struct mhi_controller *mhi_cntrl);
void mhi_arch_pcie_deinit(struct mhi_controller *mhi_cntrl);
int mhi_arch_iommu_init(struct mhi_controller *mhi_cntrl);
void mhi_arch_iommu_deinit(struct mhi_controller *mhi_cntrl);
int mhi_arch_link_off(struct mhi_controller *mhi_cntrl, bool graceful);
int mhi_arch_link_on(struct mhi_controller *mhi_cntrl);
#else
static inline int mhi_arch_iommu_init(struct mhi_controller *mhi_cntrl)
{
struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
mhi_cntrl->dev = &mhi_dev->pci_dev->dev;
return dma_set_mask_and_coherent(mhi_cntrl->dev, DMA_BIT_MASK(64));
}
static inline void mhi_arch_iommu_deinit(struct mhi_controller *mhi_cntrl)
{
}
static inline int mhi_arch_pcie_init(struct mhi_controller *mhi_cntrl)
{
return 0;
}
static inline void mhi_arch_pcie_deinit(struct mhi_controller *mhi_cntrl)
{
}
static inline int mhi_arch_link_off(struct mhi_controller *mhi_cntrl,
bool graceful)
{
return 0;
}
static inline int mhi_arch_link_on(struct mhi_controller *mhi_cntrl)
{
return 0;
}
#endif
#endif /* _MHI_QCOM_ */

drivers/bus/mhi/core/Makefile (new file)

@@ -0,0 +1,3 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_MHI_BUS) += mhi_init.o mhi_main.o mhi_pm.o mhi_boot.o mhi_dtr.o

drivers/bus/mhi/core/mhi_boot.c (new file)

@@ -0,0 +1,579 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2018, The Linux Foundation. All rights reserved. */
#include <linux/debugfs.h>
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/dma-direction.h>
#include <linux/dma-mapping.h>
#include <linux/firmware.h>
#include <linux/interrupt.h>
#include <linux/list.h>
#include <linux/of.h>
#include <linux/module.h>
#include <linux/random.h>
#include <linux/slab.h>
#include <linux/wait.h>
#include <linux/mhi.h>
#include "mhi_internal.h"
/* setup rddm vector table for rddm transfer */
static void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
struct image_info *img_info)
{
struct mhi_buf *mhi_buf = img_info->mhi_buf;
struct bhi_vec_entry *bhi_vec = img_info->bhi_vec;
int i = 0;
for (i = 0; i < img_info->entries - 1; i++, mhi_buf++, bhi_vec++) {
MHI_VERB("Setting vector:%pad size:%zu\n",
&mhi_buf->dma_addr, mhi_buf->len);
bhi_vec->dma_addr = mhi_buf->dma_addr;
bhi_vec->size = mhi_buf->len;
}
}
/* collect rddm during kernel panic */
static int __mhi_download_rddm_in_panic(struct mhi_controller *mhi_cntrl)
{
int ret;
struct mhi_buf *mhi_buf;
u32 sequence_id;
u32 rx_status;
enum mhi_ee ee;
struct image_info *rddm_image = mhi_cntrl->rddm_image;
const u32 delayus = 100;
u32 retry = (mhi_cntrl->timeout_ms * 1000) / delayus;
void __iomem *base = mhi_cntrl->bhie;
MHI_LOG("Entered with pm_state:%s dev_state:%s ee:%s\n",
to_mhi_pm_state_str(mhi_cntrl->pm_state),
TO_MHI_STATE_STR(mhi_cntrl->dev_state),
TO_MHI_EXEC_STR(mhi_cntrl->ee));
/*
* This should only be executing during a kernel panic; we expect all
* other cores to shut down while we're collecting the rddm buffer.
* After returning from this function, we expect the device to reset.
*
* Normally, we would read/write pm_state only after grabbing the
* pm_lock; since we're in a panic, we skip it.
*/
if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
return -EIO;
/*
* There is no guarantee this state change will take effect since
* we're setting it without grabbing the pm_lock; it's best effort
*/
mhi_cntrl->pm_state = MHI_PM_LD_ERR_FATAL_DETECT;
/* the update should take effect immediately */
smp_wmb();
/* setup the RX vector table */
mhi_rddm_prepare(mhi_cntrl, rddm_image);
mhi_buf = &rddm_image->mhi_buf[rddm_image->entries - 1];
MHI_LOG("Starting BHIe programming for RDDM\n");
mhi_write_reg(mhi_cntrl, base, BHIE_RXVECADDR_HIGH_OFFS,
upper_32_bits(mhi_buf->dma_addr));
mhi_write_reg(mhi_cntrl, base, BHIE_RXVECADDR_LOW_OFFS,
lower_32_bits(mhi_buf->dma_addr));
mhi_write_reg(mhi_cntrl, base, BHIE_RXVECSIZE_OFFS, mhi_buf->len);
sequence_id = prandom_u32() & BHIE_RXVECSTATUS_SEQNUM_BMSK;
if (unlikely(!sequence_id))
sequence_id = 1;
mhi_write_reg_field(mhi_cntrl, base, BHIE_RXVECDB_OFFS,
BHIE_RXVECDB_SEQNUM_BMSK, BHIE_RXVECDB_SEQNUM_SHFT,
sequence_id);
MHI_LOG("Trigger device into RDDM mode\n");
mhi_set_mhi_state(mhi_cntrl, MHI_STATE_SYS_ERR);
MHI_LOG("Waiting for image download completion\n");
while (retry--) {
ret = mhi_read_reg_field(mhi_cntrl, base, BHIE_RXVECSTATUS_OFFS,
BHIE_RXVECSTATUS_STATUS_BMSK,
BHIE_RXVECSTATUS_STATUS_SHFT,
&rx_status);
if (ret)
return -EIO;
if (rx_status == BHIE_RXVECSTATUS_STATUS_XFER_COMPL) {
MHI_LOG("RDDM successfully collected\n");
return 0;
}
udelay(delayus);
}
ee = mhi_get_exec_env(mhi_cntrl);
ret = mhi_read_reg(mhi_cntrl, base, BHIE_RXVECSTATUS_OFFS, &rx_status);
MHI_ERR("Did not complete RDDM transfer\n");
MHI_ERR("Current EE:%s\n", TO_MHI_EXEC_STR(ee));
MHI_ERR("RXVEC_STATUS:0x%x, ret:%d\n", rx_status, ret);
return -EIO;
}
/* download ramdump image from device */
int mhi_download_rddm_img(struct mhi_controller *mhi_cntrl, bool in_panic)
{
void __iomem *base = mhi_cntrl->bhie;
rwlock_t *pm_lock = &mhi_cntrl->pm_lock;
struct image_info *rddm_image = mhi_cntrl->rddm_image;
struct mhi_buf *mhi_buf;
int ret;
u32 rx_status;
u32 sequence_id;
if (!rddm_image)
return -ENOMEM;
if (in_panic)
return __mhi_download_rddm_in_panic(mhi_cntrl);
MHI_LOG("Waiting for device to enter RDDM state from EE:%s\n",
TO_MHI_EXEC_STR(mhi_cntrl->ee));
ret = wait_event_timeout(mhi_cntrl->state_event,
mhi_cntrl->ee == MHI_EE_RDDM ||
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
MHI_ERR("MHI is not in valid state, pm_state:%s ee:%s\n",
to_mhi_pm_state_str(mhi_cntrl->pm_state),
TO_MHI_EXEC_STR(mhi_cntrl->ee));
return -EIO;
}
mhi_rddm_prepare(mhi_cntrl, mhi_cntrl->rddm_image);
/* vector table is the last entry */
mhi_buf = &rddm_image->mhi_buf[rddm_image->entries - 1];
read_lock_bh(pm_lock);
if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
read_unlock_bh(pm_lock);
return -EIO;
}
MHI_LOG("Starting BHIe Programming for RDDM\n");
mhi_write_reg(mhi_cntrl, base, BHIE_RXVECADDR_HIGH_OFFS,
upper_32_bits(mhi_buf->dma_addr));
mhi_write_reg(mhi_cntrl, base, BHIE_RXVECADDR_LOW_OFFS,
lower_32_bits(mhi_buf->dma_addr));
mhi_write_reg(mhi_cntrl, base, BHIE_RXVECSIZE_OFFS, mhi_buf->len);
sequence_id = prandom_u32() & BHIE_RXVECSTATUS_SEQNUM_BMSK;
mhi_write_reg_field(mhi_cntrl, base, BHIE_RXVECDB_OFFS,
BHIE_RXVECDB_SEQNUM_BMSK, BHIE_RXVECDB_SEQNUM_SHFT,
sequence_id);
read_unlock_bh(pm_lock);
MHI_LOG("Upper:0x%x Lower:0x%x len:0x%lx sequence:%u\n",
upper_32_bits(mhi_buf->dma_addr),
lower_32_bits(mhi_buf->dma_addr),
mhi_buf->len, sequence_id);
MHI_LOG("Waiting for image download completion\n");
/* waiting for image download completion */
wait_event_timeout(mhi_cntrl->state_event,
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
mhi_read_reg_field(mhi_cntrl, base,
BHIE_RXVECSTATUS_OFFS,
BHIE_RXVECSTATUS_STATUS_BMSK,
BHIE_RXVECSTATUS_STATUS_SHFT,
&rx_status) || rx_status,
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
return -EIO;
return (rx_status == BHIE_RXVECSTATUS_STATUS_XFER_COMPL) ? 0 : -EIO;
}
EXPORT_SYMBOL(mhi_download_rddm_img);
static int mhi_fw_load_amss(struct mhi_controller *mhi_cntrl,
const struct mhi_buf *mhi_buf)
{
void __iomem *base = mhi_cntrl->bhie;
rwlock_t *pm_lock = &mhi_cntrl->pm_lock;
u32 tx_status;
read_lock_bh(pm_lock);
if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
read_unlock_bh(pm_lock);
return -EIO;
}
MHI_LOG("Starting BHIe Programming\n");
mhi_write_reg(mhi_cntrl, base, BHIE_TXVECADDR_HIGH_OFFS,
upper_32_bits(mhi_buf->dma_addr));
mhi_write_reg(mhi_cntrl, base, BHIE_TXVECADDR_LOW_OFFS,
lower_32_bits(mhi_buf->dma_addr));
mhi_write_reg(mhi_cntrl, base, BHIE_TXVECSIZE_OFFS, mhi_buf->len);
mhi_cntrl->sequence_id = prandom_u32() & BHIE_TXVECSTATUS_SEQNUM_BMSK;
mhi_write_reg_field(mhi_cntrl, base, BHIE_TXVECDB_OFFS,
BHIE_TXVECDB_SEQNUM_BMSK, BHIE_TXVECDB_SEQNUM_SHFT,
mhi_cntrl->sequence_id);
read_unlock_bh(pm_lock);
MHI_LOG("Upper:0x%x Lower:0x%x len:0x%lx sequence:%u\n",
upper_32_bits(mhi_buf->dma_addr),
lower_32_bits(mhi_buf->dma_addr),
mhi_buf->len, mhi_cntrl->sequence_id);
MHI_LOG("Waiting for image transfer completion\n");
/* waiting for image download completion */
wait_event_timeout(mhi_cntrl->state_event,
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
mhi_read_reg_field(mhi_cntrl, base,
BHIE_TXVECSTATUS_OFFS,
BHIE_TXVECSTATUS_STATUS_BMSK,
BHIE_TXVECSTATUS_STATUS_SHFT,
&tx_status) || tx_status,
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
return -EIO;
return (tx_status == BHIE_TXVECSTATUS_STATUS_XFER_COMPL) ? 0 : -EIO;
}
static int mhi_fw_load_sbl(struct mhi_controller *mhi_cntrl,
dma_addr_t dma_addr,
size_t size)
{
u32 tx_status, val;
int i, ret;
void __iomem *base = mhi_cntrl->bhi;
rwlock_t *pm_lock = &mhi_cntrl->pm_lock;
struct {
char *name;
u32 offset;
} error_reg[] = {
{ "ERROR_CODE", BHI_ERRCODE },
{ "ERROR_DBG1", BHI_ERRDBG1 },
{ "ERROR_DBG2", BHI_ERRDBG2 },
{ "ERROR_DBG3", BHI_ERRDBG3 },
{ NULL },
};
MHI_LOG("Starting BHI programming\n");
/* program start sbl download via bhi protocol */
read_lock_bh(pm_lock);
if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
read_unlock_bh(pm_lock);
goto invalid_pm_state;
}
mhi_write_reg(mhi_cntrl, base, BHI_STATUS, 0);
mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_HIGH,
upper_32_bits(dma_addr));
mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_LOW,
lower_32_bits(dma_addr));
mhi_write_reg(mhi_cntrl, base, BHI_IMGSIZE, size);
mhi_cntrl->session_id = prandom_u32() & BHI_TXDB_SEQNUM_BMSK;
mhi_write_reg(mhi_cntrl, base, BHI_IMGTXDB, mhi_cntrl->session_id);
read_unlock_bh(pm_lock);
MHI_LOG("Waiting for image transfer completion\n");
/* waiting for image download completion */
wait_event_timeout(mhi_cntrl->state_event,
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
mhi_read_reg_field(mhi_cntrl, base, BHI_STATUS,
BHI_STATUS_MASK, BHI_STATUS_SHIFT,
&tx_status) || tx_status,
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
goto invalid_pm_state;
if (tx_status == BHI_STATUS_ERROR) {
MHI_ERR("Image transfer failed\n");
read_lock_bh(pm_lock);
if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
for (i = 0; error_reg[i].name; i++) {
ret = mhi_read_reg(mhi_cntrl, base,
error_reg[i].offset, &val);
if (ret)
break;
MHI_ERR("reg:%s value:0x%x\n",
error_reg[i].name, val);
}
}
read_unlock_bh(pm_lock);
goto invalid_pm_state;
}
return (tx_status == BHI_STATUS_SUCCESS) ? 0 : -ETIMEDOUT;
invalid_pm_state:
return -EIO;
}
void mhi_free_bhie_table(struct mhi_controller *mhi_cntrl,
struct image_info *image_info)
{
int i;
struct mhi_buf *mhi_buf = image_info->mhi_buf;
for (i = 0; i < image_info->entries; i++, mhi_buf++)
mhi_free_coherent(mhi_cntrl, mhi_buf->len, mhi_buf->buf,
mhi_buf->dma_addr);
kfree(image_info->mhi_buf);
kfree(image_info);
}
int mhi_alloc_bhie_table(struct mhi_controller *mhi_cntrl,
struct image_info **image_info,
size_t alloc_size)
{
size_t seg_size = mhi_cntrl->seg_len;
/* require an additional entry for the vector table */
int segments = DIV_ROUND_UP(alloc_size, seg_size) + 1;
int i;
struct image_info *img_info;
struct mhi_buf *mhi_buf;
MHI_LOG("Allocating bytes:%zu seg_size:%zu total_seg:%u\n",
alloc_size, seg_size, segments);
img_info = kzalloc(sizeof(*img_info), GFP_KERNEL);
if (!img_info)
return -ENOMEM;
/* allocate memory for entries */
img_info->mhi_buf = kcalloc(segments, sizeof(*img_info->mhi_buf),
GFP_KERNEL);
if (!img_info->mhi_buf)
goto error_alloc_mhi_buf;
/* allocate and populate vector table */
mhi_buf = img_info->mhi_buf;
for (i = 0; i < segments; i++, mhi_buf++) {
size_t vec_size = seg_size;
/* last entry is for vector table */
if (i == segments - 1)
vec_size = sizeof(struct bhi_vec_entry) * i;
mhi_buf->len = vec_size;
mhi_buf->buf = mhi_alloc_coherent(mhi_cntrl, vec_size,
&mhi_buf->dma_addr, GFP_KERNEL);
if (!mhi_buf->buf)
goto error_alloc_segment;
MHI_LOG("Entry:%d Address:0x%llx size:%lu\n", i,
mhi_buf->dma_addr, mhi_buf->len);
}
img_info->bhi_vec = img_info->mhi_buf[segments - 1].buf;
img_info->entries = segments;
*image_info = img_info;
MHI_LOG("Successfully allocated bhi vec table\n");
return 0;
error_alloc_segment:
for (--i, --mhi_buf; i >= 0; i--, mhi_buf--)
mhi_free_coherent(mhi_cntrl, mhi_buf->len, mhi_buf->buf,
mhi_buf->dma_addr);
error_alloc_mhi_buf:
kfree(img_info);
return -ENOMEM;
}
static void mhi_firmware_copy(struct mhi_controller *mhi_cntrl,
const struct firmware *firmware,
struct image_info *img_info)
{
size_t remainder = firmware->size;
size_t to_cpy;
const u8 *buf = firmware->data;
int i = 0;
struct mhi_buf *mhi_buf = img_info->mhi_buf;
struct bhi_vec_entry *bhi_vec = img_info->bhi_vec;
while (remainder) {
MHI_ASSERT(i >= img_info->entries, "malformed vector table");
to_cpy = min(remainder, mhi_buf->len);
memcpy(mhi_buf->buf, buf, to_cpy);
bhi_vec->dma_addr = mhi_buf->dma_addr;
bhi_vec->size = to_cpy;
MHI_VERB("Setting Vector:0x%llx size: %llu\n",
bhi_vec->dma_addr, bhi_vec->size);
buf += to_cpy;
remainder -= to_cpy;
i++;
bhi_vec++;
mhi_buf++;
}
}
void mhi_fw_load_worker(struct work_struct *work)
{
int ret;
struct mhi_controller *mhi_cntrl;
const char *fw_name;
const struct firmware *firmware;
struct image_info *image_info;
void *buf;
dma_addr_t dma_addr;
size_t size;
mhi_cntrl = container_of(work, struct mhi_controller, fw_worker);
MHI_LOG("Waiting for device to enter PBL from EE:%s\n",
TO_MHI_EXEC_STR(mhi_cntrl->ee));
ret = wait_event_timeout(mhi_cntrl->state_event,
MHI_IN_PBL(mhi_cntrl->ee) ||
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
MHI_ERR("MHI is not in valid state\n");
return;
}
MHI_LOG("Device current EE:%s\n", TO_MHI_EXEC_STR(mhi_cntrl->ee));
/* if device in pthru, we do not have to load firmware */
if (mhi_cntrl->ee == MHI_EE_PTHRU)
return;
fw_name = (mhi_cntrl->ee == MHI_EE_EDL) ?
mhi_cntrl->edl_image : mhi_cntrl->fw_image;
if (!fw_name || (mhi_cntrl->fbc_download && (!mhi_cntrl->sbl_size ||
!mhi_cntrl->seg_len))) {
MHI_ERR("No firmware image defined or !sbl_size || !seg_len\n");
return;
}
ret = request_firmware(&firmware, fw_name, mhi_cntrl->dev);
if (ret) {
MHI_ERR("Error loading firmware, ret:%d\n", ret);
return;
}
size = (mhi_cntrl->fbc_download) ? mhi_cntrl->sbl_size : firmware->size;
/* the sbl size provided is the maximum size, not necessarily the image size */
if (size > firmware->size)
size = firmware->size;
buf = mhi_alloc_coherent(mhi_cntrl, size, &dma_addr, GFP_KERNEL);
if (!buf) {
MHI_ERR("Could not allocate memory for image\n");
release_firmware(firmware);
return;
}
/* load sbl image */
memcpy(buf, firmware->data, size);
ret = mhi_fw_load_sbl(mhi_cntrl, dma_addr, size);
mhi_free_coherent(mhi_cntrl, size, buf, dma_addr);
if (!mhi_cntrl->fbc_download || ret || mhi_cntrl->ee == MHI_EE_EDL)
release_firmware(firmware);
/* error or in edl, we're done */
if (ret || mhi_cntrl->ee == MHI_EE_EDL)
return;
write_lock_irq(&mhi_cntrl->pm_lock);
mhi_cntrl->dev_state = MHI_STATE_RESET;
write_unlock_irq(&mhi_cntrl->pm_lock);
/*
* if we're doing fbc, populate the vector tables while the
* device transitions into the MHI READY state
*/
if (mhi_cntrl->fbc_download) {
ret = mhi_alloc_bhie_table(mhi_cntrl, &mhi_cntrl->fbc_image,
firmware->size);
if (ret) {
MHI_ERR("Error alloc size of %zu\n", firmware->size);
goto error_alloc_fw_table;
}
MHI_LOG("Copying firmware image into vector table\n");
/* load the firmware into BHIE vec table */
mhi_firmware_copy(mhi_cntrl, firmware, mhi_cntrl->fbc_image);
}
/* transitioning into MHI RESET->READY state */
ret = mhi_ready_state_transition(mhi_cntrl);
MHI_LOG("To Reset->Ready PM_STATE:%s MHI_STATE:%s EE:%s, ret:%d\n",
to_mhi_pm_state_str(mhi_cntrl->pm_state),
TO_MHI_STATE_STR(mhi_cntrl->dev_state),
TO_MHI_EXEC_STR(mhi_cntrl->ee), ret);
if (!mhi_cntrl->fbc_download)
return;
if (ret) {
MHI_ERR("Did not transition to READY state\n");
goto error_read;
}
/* wait for SBL event */
ret = wait_event_timeout(mhi_cntrl->state_event,
mhi_cntrl->ee == MHI_EE_SBL ||
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
MHI_ERR("MHI did not enter BHIE\n");
goto error_read;
}
/* start full firmware image download */
image_info = mhi_cntrl->fbc_image;
ret = mhi_fw_load_amss(mhi_cntrl,
/* last entry is vec table */
&image_info->mhi_buf[image_info->entries - 1]);
MHI_LOG("amss fw_load, ret:%d\n", ret);
release_firmware(firmware);
return;
error_read:
mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image);
mhi_cntrl->fbc_image = NULL;
error_alloc_fw_table:
release_firmware(firmware);
}


@@ -0,0 +1,224 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2018, The Linux Foundation. All rights reserved.*/
#include <linux/debugfs.h>
#include <linux/device.h>
#include <linux/dma-direction.h>
#include <linux/dma-mapping.h>
#include <linux/interrupt.h>
#include <linux/list.h>
#include <linux/of.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/termios.h>
#include <linux/wait.h>
#include <linux/mhi.h>
#include "mhi_internal.h"
struct __packed dtr_ctrl_msg {
u32 preamble;
u32 msg_id;
u32 dest_id;
u32 size;
u32 msg;
};
#define CTRL_MAGIC (0x4C525443)
#define CTRL_MSG_DTR BIT(0)
#define CTRL_MSG_RTS BIT(1)
#define CTRL_MSG_DCD BIT(0)
#define CTRL_MSG_DSR BIT(1)
#define CTRL_MSG_RI BIT(3)
#define CTRL_HOST_STATE (0x10)
#define CTRL_DEVICE_STATE (0x11)
#define CTRL_GET_CHID(dtr) (dtr->dest_id & 0xFF)
static int mhi_dtr_tiocmset(struct mhi_controller *mhi_cntrl,
struct mhi_device *mhi_dev,
u32 tiocm)
{
struct dtr_ctrl_msg *dtr_msg = NULL;
struct mhi_chan *dtr_chan = mhi_cntrl->dtr_dev->ul_chan;
spinlock_t *res_lock = &mhi_dev->dev.devres_lock;
u32 cur_tiocm;
int ret = 0;
cur_tiocm = mhi_dev->tiocm & ~(TIOCM_CD | TIOCM_DSR | TIOCM_RI);
tiocm &= (TIOCM_DTR | TIOCM_RTS);
/* state did not change */
if (cur_tiocm == tiocm)
return 0;
mutex_lock(&dtr_chan->mutex);
dtr_msg = kzalloc(sizeof(*dtr_msg), GFP_KERNEL);
if (!dtr_msg) {
ret = -ENOMEM;
goto tiocm_exit;
}
dtr_msg->preamble = CTRL_MAGIC;
dtr_msg->msg_id = CTRL_HOST_STATE;
dtr_msg->dest_id = mhi_dev->ul_chan_id;
dtr_msg->size = sizeof(u32);
if (tiocm & TIOCM_DTR)
dtr_msg->msg |= CTRL_MSG_DTR;
if (tiocm & TIOCM_RTS)
dtr_msg->msg |= CTRL_MSG_RTS;
reinit_completion(&dtr_chan->completion);
ret = mhi_queue_transfer(mhi_cntrl->dtr_dev, DMA_TO_DEVICE, dtr_msg,
sizeof(*dtr_msg), MHI_EOT);
if (ret)
goto tiocm_exit;
ret = wait_for_completion_timeout(&dtr_chan->completion,
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (!ret) {
MHI_ERR("Failed to receive transfer callback\n");
ret = -EIO;
goto tiocm_exit;
}
ret = 0;
spin_lock_irq(res_lock);
mhi_dev->tiocm &= ~(TIOCM_DTR | TIOCM_RTS);
mhi_dev->tiocm |= tiocm;
spin_unlock_irq(res_lock);
tiocm_exit:
kfree(dtr_msg);
mutex_unlock(&dtr_chan->mutex);
return ret;
}
long mhi_ioctl(struct mhi_device *mhi_dev, unsigned int cmd, unsigned long arg)
{
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
int ret;
/* ioctl not supported by this controller */
if (!mhi_cntrl->dtr_dev)
return -EIO;
switch (cmd) {
case TIOCMGET:
return mhi_dev->tiocm;
case TIOCMSET:
{
u32 tiocm;
ret = get_user(tiocm, (u32 *)arg);
if (ret)
return ret;
return mhi_dtr_tiocmset(mhi_cntrl, mhi_dev, tiocm);
}
default:
break;
}
return -EINVAL;
}
EXPORT_SYMBOL(mhi_ioctl);
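/*
 * Usage note: user space reaches mhi_ioctl() through the UCI node's
 * unlocked_ioctl (mhi_uci_ioctl() in mhi_uci.c forwards here). In this
 * snapshot TIOCMGET returns the modem bits as the ioctl return value,
 * and TIOCMSET reads a u32 from user memory. A minimal user space
 * sketch, with a hypothetical device node name:
 *
 *	#include <fcntl.h>
 *	#include <stdio.h>
 *	#include <unistd.h>
 *	#include <sys/ioctl.h>
 *
 *	int main(void)
 *	{
 *		int fd = open("/dev/mhi_pipe_14", O_RDWR);
 *		unsigned int bits = TIOCM_DTR | TIOCM_RTS;
 *
 *		if (fd < 0)
 *			return 1;
 *		if (ioctl(fd, TIOCMSET, &bits) < 0)
 *			perror("TIOCMSET");
 *		printf("tiocm: 0x%x\n", ioctl(fd, TIOCMGET, 0));
 *		close(fd);
 *		return 0;
 *	}
 */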
static void mhi_dtr_dl_xfer_cb(struct mhi_device *mhi_dev,
struct mhi_result *mhi_result)
{
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
struct dtr_ctrl_msg *dtr_msg = mhi_result->buf_addr;
u32 chan;
spinlock_t *res_lock;
if (mhi_result->bytes_xferd != sizeof(*dtr_msg)) {
MHI_ERR("Unexpected length %zu received\n",
mhi_result->bytes_xferd);
return;
}
MHI_VERB("preamble:0x%x msg_id:%u dest_id:%u msg:0x%x\n",
dtr_msg->preamble, dtr_msg->msg_id, dtr_msg->dest_id,
dtr_msg->msg);
chan = CTRL_GET_CHID(dtr_msg);
if (chan >= mhi_cntrl->max_chan)
return;
mhi_dev = mhi_cntrl->mhi_chan[chan].mhi_dev;
if (!mhi_dev)
return;
res_lock = &mhi_dev->dev.devres_lock;
spin_lock_irq(res_lock);
mhi_dev->tiocm &= ~(TIOCM_CD | TIOCM_DSR | TIOCM_RI);
if (dtr_msg->msg & CTRL_MSG_DCD)
mhi_dev->tiocm |= TIOCM_CD;
if (dtr_msg->msg & CTRL_MSG_DSR)
mhi_dev->tiocm |= TIOCM_DSR;
if (dtr_msg->msg & CTRL_MSG_RI)
mhi_dev->tiocm |= TIOCM_RI;
spin_unlock_irq(res_lock);
}
static void mhi_dtr_ul_xfer_cb(struct mhi_device *mhi_dev,
struct mhi_result *mhi_result)
{
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
struct mhi_chan *dtr_chan = mhi_cntrl->dtr_dev->ul_chan;
MHI_VERB("Received with status:%d\n", mhi_result->transaction_status);
if (!mhi_result->transaction_status)
complete(&dtr_chan->completion);
}
static void mhi_dtr_remove(struct mhi_device *mhi_dev)
{
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
mhi_cntrl->dtr_dev = NULL;
}
static int mhi_dtr_probe(struct mhi_device *mhi_dev,
const struct mhi_device_id *id)
{
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
int ret;
MHI_LOG("Enter for DTR control channel\n");
ret = mhi_prepare_for_transfer(mhi_dev);
if (!ret)
mhi_cntrl->dtr_dev = mhi_dev;
MHI_LOG("Exit with ret:%d\n", ret);
return ret;
}
static const struct mhi_device_id mhi_dtr_table[] = {
{ .chan = "IP_CTRL" },
{},
};
static struct mhi_driver mhi_dtr_driver = {
.id_table = mhi_dtr_table,
.remove = mhi_dtr_remove,
.probe = mhi_dtr_probe,
.ul_xfer_cb = mhi_dtr_ul_xfer_cb,
.dl_xfer_cb = mhi_dtr_dl_xfer_cb,
.driver = {
.name = "MHI_DTR",
.owner = THIS_MODULE,
}
};
int __init mhi_dtr_init(void)
{
return mhi_driver_register(&mhi_dtr_driver);
}

File diff suppressed because it is too large


@@ -0,0 +1,790 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) 2018, The Linux Foundation. All rights reserved. */
#ifndef _MHI_INT_H
#define _MHI_INT_H
extern struct bus_type mhi_bus_type;
/* MHI mmio register mapping */
#define PCI_INVALID_READ(val) (val == U32_MAX)
#define MHIREGLEN (0x0)
#define MHIREGLEN_MHIREGLEN_MASK (0xFFFFFFFF)
#define MHIREGLEN_MHIREGLEN_SHIFT (0)
#define MHIVER (0x8)
#define MHIVER_MHIVER_MASK (0xFFFFFFFF)
#define MHIVER_MHIVER_SHIFT (0)
#define MHICFG (0x10)
#define MHICFG_NHWER_MASK (0xFF000000)
#define MHICFG_NHWER_SHIFT (24)
#define MHICFG_NER_MASK (0xFF0000)
#define MHICFG_NER_SHIFT (16)
#define MHICFG_NHWCH_MASK (0xFF00)
#define MHICFG_NHWCH_SHIFT (8)
#define MHICFG_NCH_MASK (0xFF)
#define MHICFG_NCH_SHIFT (0)
#define CHDBOFF (0x18)
#define CHDBOFF_CHDBOFF_MASK (0xFFFFFFFF)
#define CHDBOFF_CHDBOFF_SHIFT (0)
#define ERDBOFF (0x20)
#define ERDBOFF_ERDBOFF_MASK (0xFFFFFFFF)
#define ERDBOFF_ERDBOFF_SHIFT (0)
#define BHIOFF (0x28)
#define BHIOFF_BHIOFF_MASK (0xFFFFFFFF)
#define BHIOFF_BHIOFF_SHIFT (0)
#define BHIEOFF (0x2C)
#define BHIEOFF_BHIEOFF_MASK (0xFFFFFFFF)
#define BHIEOFF_BHIEOFF_SHIFT (0)
#define DEBUGOFF (0x30)
#define DEBUGOFF_DEBUGOFF_MASK (0xFFFFFFFF)
#define DEBUGOFF_DEBUGOFF_SHIFT (0)
#define MHICTRL (0x38)
#define MHICTRL_MHISTATE_MASK (0x0000FF00)
#define MHICTRL_MHISTATE_SHIFT (8)
#define MHICTRL_RESET_MASK (0x2)
#define MHICTRL_RESET_SHIFT (1)
#define MHISTATUS (0x48)
#define MHISTATUS_MHISTATE_MASK (0x0000FF00)
#define MHISTATUS_MHISTATE_SHIFT (8)
#define MHISTATUS_SYSERR_MASK (0x4)
#define MHISTATUS_SYSERR_SHIFT (2)
#define MHISTATUS_READY_MASK (0x1)
#define MHISTATUS_READY_SHIFT (0)
#define CCABAP_LOWER (0x58)
#define CCABAP_LOWER_CCABAP_LOWER_MASK (0xFFFFFFFF)
#define CCABAP_LOWER_CCABAP_LOWER_SHIFT (0)
#define CCABAP_HIGHER (0x5C)
#define CCABAP_HIGHER_CCABAP_HIGHER_MASK (0xFFFFFFFF)
#define CCABAP_HIGHER_CCABAP_HIGHER_SHIFT (0)
#define ECABAP_LOWER (0x60)
#define ECABAP_LOWER_ECABAP_LOWER_MASK (0xFFFFFFFF)
#define ECABAP_LOWER_ECABAP_LOWER_SHIFT (0)
#define ECABAP_HIGHER (0x64)
#define ECABAP_HIGHER_ECABAP_HIGHER_MASK (0xFFFFFFFF)
#define ECABAP_HIGHER_ECABAP_HIGHER_SHIFT (0)
#define CRCBAP_LOWER (0x68)
#define CRCBAP_LOWER_CRCBAP_LOWER_MASK (0xFFFFFFFF)
#define CRCBAP_LOWER_CRCBAP_LOWER_SHIFT (0)
#define CRCBAP_HIGHER (0x6C)
#define CRCBAP_HIGHER_CRCBAP_HIGHER_MASK (0xFFFFFFFF)
#define CRCBAP_HIGHER_CRCBAP_HIGHER_SHIFT (0)
#define CRDB_LOWER (0x70)
#define CRDB_LOWER_CRDB_LOWER_MASK (0xFFFFFFFF)
#define CRDB_LOWER_CRDB_LOWER_SHIFT (0)
#define CRDB_HIGHER (0x74)
#define CRDB_HIGHER_CRDB_HIGHER_MASK (0xFFFFFFFF)
#define CRDB_HIGHER_CRDB_HIGHER_SHIFT (0)
#define MHICTRLBASE_LOWER (0x80)
#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_MASK (0xFFFFFFFF)
#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_SHIFT (0)
#define MHICTRLBASE_HIGHER (0x84)
#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_MASK (0xFFFFFFFF)
#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_SHIFT (0)
#define MHICTRLLIMIT_LOWER (0x88)
#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_MASK (0xFFFFFFFF)
#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_SHIFT (0)
#define MHICTRLLIMIT_HIGHER (0x8C)
#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_MASK (0xFFFFFFFF)
#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_SHIFT (0)
#define MHIDATABASE_LOWER (0x98)
#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_MASK (0xFFFFFFFF)
#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_SHIFT (0)
#define MHIDATABASE_HIGHER (0x9C)
#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_MASK (0xFFFFFFFF)
#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_SHIFT (0)
#define MHIDATALIMIT_LOWER (0xA0)
#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_MASK (0xFFFFFFFF)
#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_SHIFT (0)
#define MHIDATALIMIT_HIGHER (0xA4)
#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_MASK (0xFFFFFFFF)
#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_SHIFT (0)
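/*
 * Note: every register field above is encoded as a MASK/SHIFT pair,
 * consumed by mhi_read_reg_field()/mhi_write_reg_field() declared later
 * in this header. Hand-decoding MHICFG would look like this (helper
 * names are illustrative, not part of the driver):
 *
 *	static inline u32 mhicfg_nch(u32 reg)
 *	{
 *		return (reg & MHICFG_NCH_MASK) >> MHICFG_NCH_SHIFT;
 *	}
 *
 *	static inline u32 mhicfg_ner(u32 reg)
 *	{
 *		return (reg & MHICFG_NER_MASK) >> MHICFG_NER_SHIFT;
 *	}
 */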
/* MHI misc capability registers */
#define MISC_OFFSET (0x24)
#define MISC_CAP_MASK (0xFFFFFFFF)
#define MISC_CAP_SHIFT (0)
#define CAP_CAPID_MASK (0xFF000000)
#define CAP_CAPID_SHIFT (24)
#define CAP_NEXT_CAP_MASK (0x00FFF000)
#define CAP_NEXT_CAP_SHIFT (12)
/* MHI Timesync offsets */
#define TIMESYNC_CFG_OFFSET (0x00)
#define TIMESYNC_CFG_CAPID_MASK (CAP_CAPID_MASK)
#define TIMESYNC_CFG_CAPID_SHIFT (CAP_CAPID_SHIFT)
#define TIMESYNC_CFG_NEXT_OFF_MASK (CAP_NEXT_CAP_MASK)
#define TIMESYNC_CFG_NEXT_OFF_SHIFT (CAP_NEXT_CAP_SHIFT)
#define TIMESYNC_CFG_NUMCMD_MASK (0xFF)
#define TIMESYNC_CFG_NUMCMD_SHIFT (0)
#define TIMESYNC_TIME_LOW_OFFSET (0x4)
#define TIMESYNC_TIME_HIGH_OFFSET (0x8)
#define TIMESYNC_DB_OFFSET (0xC)
#define TIMESYNC_CAP_ID (2)
/* MHI BHI offsets */
#define BHI_BHIVERSION_MINOR (0x00)
#define BHI_BHIVERSION_MAJOR (0x04)
#define BHI_IMGADDR_LOW (0x08)
#define BHI_IMGADDR_HIGH (0x0C)
#define BHI_IMGSIZE (0x10)
#define BHI_RSVD1 (0x14)
#define BHI_IMGTXDB (0x18)
#define BHI_TXDB_SEQNUM_BMSK (0x3FFFFFFF)
#define BHI_TXDB_SEQNUM_SHFT (0)
#define BHI_RSVD2 (0x1C)
#define BHI_INTVEC (0x20)
#define BHI_RSVD3 (0x24)
#define BHI_EXECENV (0x28)
#define BHI_STATUS (0x2C)
#define BHI_ERRCODE (0x30)
#define BHI_ERRDBG1 (0x34)
#define BHI_ERRDBG2 (0x38)
#define BHI_ERRDBG3 (0x3C)
#define BHI_SERIALNU (0x40)
#define BHI_SBLANTIROLLVER (0x44)
#define BHI_NUMSEG (0x48)
#define BHI_MSMHWID(n) (0x4C + (0x4 * n))
#define BHI_OEMPKHASH(n) (0x64 + (0x4 * n))
#define BHI_RSVD5 (0xC4)
#define BHI_STATUS_MASK (0xC0000000)
#define BHI_STATUS_SHIFT (30)
#define BHI_STATUS_ERROR (3)
#define BHI_STATUS_SUCCESS (2)
#define BHI_STATUS_RESET (0)
/* MHI BHIE offsets */
#define BHIE_MSMSOCID_OFFS (0x0000)
#define BHIE_TXVECADDR_LOW_OFFS (0x002C)
#define BHIE_TXVECADDR_HIGH_OFFS (0x0030)
#define BHIE_TXVECSIZE_OFFS (0x0034)
#define BHIE_TXVECDB_OFFS (0x003C)
#define BHIE_TXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
#define BHIE_TXVECDB_SEQNUM_SHFT (0)
#define BHIE_TXVECSTATUS_OFFS (0x0044)
#define BHIE_TXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
#define BHIE_TXVECSTATUS_SEQNUM_SHFT (0)
#define BHIE_TXVECSTATUS_STATUS_BMSK (0xC0000000)
#define BHIE_TXVECSTATUS_STATUS_SHFT (30)
#define BHIE_TXVECSTATUS_STATUS_RESET (0x00)
#define BHIE_TXVECSTATUS_STATUS_XFER_COMPL (0x02)
#define BHIE_TXVECSTATUS_STATUS_ERROR (0x03)
#define BHIE_RXVECADDR_LOW_OFFS (0x0060)
#define BHIE_RXVECADDR_HIGH_OFFS (0x0064)
#define BHIE_RXVECSIZE_OFFS (0x0068)
#define BHIE_RXVECDB_OFFS (0x0070)
#define BHIE_RXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
#define BHIE_RXVECDB_SEQNUM_SHFT (0)
#define BHIE_RXVECSTATUS_OFFS (0x0078)
#define BHIE_RXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
#define BHIE_RXVECSTATUS_SEQNUM_SHFT (0)
#define BHIE_RXVECSTATUS_STATUS_BMSK (0xC0000000)
#define BHIE_RXVECSTATUS_STATUS_SHFT (30)
#define BHIE_RXVECSTATUS_STATUS_RESET (0x00)
#define BHIE_RXVECSTATUS_STATUS_XFER_COMPL (0x02)
#define BHIE_RXVECSTATUS_STATUS_ERROR (0x03)
struct mhi_event_ctxt {
u32 reserved : 8;
u32 intmodc : 8;
u32 intmodt : 16;
u32 ertype;
u32 msivec;
u64 rbase __packed __aligned(4);
u64 rlen __packed __aligned(4);
u64 rp __packed __aligned(4);
u64 wp __packed __aligned(4);
};
struct mhi_chan_ctxt {
u32 chstate : 8;
u32 brstmode : 2;
u32 pollcfg : 6;
u32 reserved : 16;
u32 chtype;
u32 erindex;
u64 rbase __packed __aligned(4);
u64 rlen __packed __aligned(4);
u64 rp __packed __aligned(4);
u64 wp __packed __aligned(4);
};
struct mhi_cmd_ctxt {
u32 reserved0;
u32 reserved1;
u32 reserved2;
u64 rbase __packed __aligned(4);
u64 rlen __packed __aligned(4);
u64 rp __packed __aligned(4);
u64 wp __packed __aligned(4);
};
struct mhi_tre {
u64 ptr;
u32 dword[2];
};
struct bhi_vec_entry {
u64 dma_addr;
u64 size;
};
enum mhi_cmd_type {
MHI_CMD_TYPE_NOP = 1,
MHI_CMD_TYPE_RESET = 16,
MHI_CMD_TYPE_STOP = 17,
MHI_CMD_TYPE_START = 18,
MHI_CMD_TYPE_TSYNC = 24,
};
/* no operation command */
#define MHI_TRE_CMD_NOOP_PTR (0)
#define MHI_TRE_CMD_NOOP_DWORD0 (0)
#define MHI_TRE_CMD_NOOP_DWORD1 (MHI_CMD_TYPE_NOP << 16)
/* channel reset command */
#define MHI_TRE_CMD_RESET_PTR (0)
#define MHI_TRE_CMD_RESET_DWORD0 (0)
#define MHI_TRE_CMD_RESET_DWORD1(chid) ((chid << 24) | \
(MHI_CMD_TYPE_RESET << 16))
/* channel stop command */
#define MHI_TRE_CMD_STOP_PTR (0)
#define MHI_TRE_CMD_STOP_DWORD0 (0)
#define MHI_TRE_CMD_STOP_DWORD1(chid) ((chid << 24) | (MHI_CMD_TYPE_STOP << 16))
/* channel start command */
#define MHI_TRE_CMD_START_PTR (0)
#define MHI_TRE_CMD_START_DWORD0 (0)
#define MHI_TRE_CMD_START_DWORD1(chid) ((chid << 24) | \
(MHI_CMD_TYPE_START << 16))
/* time sync cfg command */
#define MHI_TRE_CMD_TSYNC_CFG_PTR (0)
#define MHI_TRE_CMD_TSYNC_CFG_DWORD0 (0)
#define MHI_TRE_CMD_TSYNC_CFG_DWORD1(er) ((MHI_CMD_TYPE_TSYNC << 16) | \
(er << 24))
#define MHI_TRE_GET_CMD_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
#define MHI_TRE_GET_CMD_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
/* event descriptor macros */
#define MHI_TRE_EV_PTR(ptr) (ptr)
#define MHI_TRE_EV_DWORD0(code, len) ((code << 24) | len)
#define MHI_TRE_EV_DWORD1(chid, type) ((chid << 24) | (type << 16))
#define MHI_TRE_GET_EV_PTR(tre) ((tre)->ptr)
#define MHI_TRE_GET_EV_CODE(tre) (((tre)->dword[0] >> 24) & 0xFF)
#define MHI_TRE_GET_EV_LEN(tre) ((tre)->dword[0] & 0xFFFF)
#define MHI_TRE_GET_EV_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
#define MHI_TRE_GET_EV_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
#define MHI_TRE_GET_EV_STATE(tre) (((tre)->dword[0] >> 24) & 0xFF)
#define MHI_TRE_GET_EV_EXECENV(tre) (((tre)->dword[0] >> 24) & 0xFF)
#define MHI_TRE_GET_EV_SEQ(tre) ((tre)->dword[0])
#define MHI_TRE_GET_EV_TIME(tre) ((tre)->ptr)
/* transfer descriptor macros */
#define MHI_TRE_DATA_PTR(ptr) (ptr)
#define MHI_TRE_DATA_DWORD0(len) (len & MHI_MAX_MTU)
#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \
| (ieot << 9) | (ieob << 8) | chain)
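/*
 * Putting the command macros together: a start-channel element is a
 * fixed pointer plus two dwords. A sketch (the actual queuing path is
 * mhi_send_cmd(), declared below; 'chid' is a hypothetical channel
 * number):
 *
 *	struct mhi_tre start = {
 *		.ptr = MHI_TRE_CMD_START_PTR,
 *		.dword[0] = MHI_TRE_CMD_START_DWORD0,
 *		.dword[1] = MHI_TRE_CMD_START_DWORD1(chid),
 *	};
 */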
enum MHI_CMD {
MHI_CMD_RESET_CHAN,
MHI_CMD_START_CHAN,
MHI_CMD_TIMSYNC_CFG,
};
enum MHI_PKT_TYPE {
MHI_PKT_TYPE_INVALID = 0x0,
MHI_PKT_TYPE_NOOP_CMD = 0x1,
MHI_PKT_TYPE_TRANSFER = 0x2,
MHI_PKT_TYPE_RESET_CHAN_CMD = 0x10,
MHI_PKT_TYPE_STOP_CHAN_CMD = 0x11,
MHI_PKT_TYPE_START_CHAN_CMD = 0x12,
MHI_PKT_TYPE_STATE_CHANGE_EVENT = 0x20,
MHI_PKT_TYPE_CMD_COMPLETION_EVENT = 0x21,
MHI_PKT_TYPE_TX_EVENT = 0x22,
MHI_PKT_TYPE_EE_EVENT = 0x40,
MHI_PKT_TYPE_TSYNC_EVENT = 0x48,
MHI_PKT_TYPE_STALE_EVENT, /* internal event */
};
/* MHI transfer completion events */
enum MHI_EV_CCS {
MHI_EV_CC_INVALID = 0x0,
MHI_EV_CC_SUCCESS = 0x1,
MHI_EV_CC_EOT = 0x2,
MHI_EV_CC_OVERFLOW = 0x3,
MHI_EV_CC_EOB = 0x4,
MHI_EV_CC_OOB = 0x5,
MHI_EV_CC_DB_MODE = 0x6,
MHI_EV_CC_UNDEFINED_ERR = 0x10,
MHI_EV_CC_BAD_TRE = 0x11,
};
enum MHI_CH_STATE {
MHI_CH_STATE_DISABLED = 0x0,
MHI_CH_STATE_ENABLED = 0x1,
MHI_CH_STATE_RUNNING = 0x2,
MHI_CH_STATE_SUSPENDED = 0x3,
MHI_CH_STATE_STOP = 0x4,
MHI_CH_STATE_ERROR = 0x5,
};
enum MHI_BRSTMODE {
MHI_BRSTMODE_DISABLE = 0x2,
MHI_BRSTMODE_ENABLE = 0x3,
};
#define MHI_INVALID_BRSTMODE(mode) (mode != MHI_BRSTMODE_DISABLE && \
mode != MHI_BRSTMODE_ENABLE)
extern const char * const mhi_ee_str[MHI_EE_MAX];
#define TO_MHI_EXEC_STR(ee) (((ee) >= MHI_EE_MAX) ? \
"INVALID_EE" : mhi_ee_str[ee])
#define MHI_IN_PBL(ee) (ee == MHI_EE_PBL || ee == MHI_EE_PTHRU || \
ee == MHI_EE_EDL)
#define MHI_IN_MISSION_MODE(ee) (ee == MHI_EE_AMSS || ee == MHI_EE_WFW)
enum MHI_ST_TRANSITION {
MHI_ST_TRANSITION_PBL,
MHI_ST_TRANSITION_READY,
MHI_ST_TRANSITION_SBL,
MHI_ST_TRANSITION_MISSION_MODE,
MHI_ST_TRANSITION_MAX,
};
extern const char * const mhi_state_tran_str[MHI_ST_TRANSITION_MAX];
#define TO_MHI_STATE_TRANS_STR(state) (((state) >= MHI_ST_TRANSITION_MAX) ? \
"INVALID_STATE" : mhi_state_tran_str[state])
extern const char * const mhi_state_str[MHI_STATE_MAX];
#define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \
!mhi_state_str[state]) ? \
"INVALID_STATE" : mhi_state_str[state])
enum {
MHI_PM_BIT_DISABLE,
MHI_PM_BIT_POR,
MHI_PM_BIT_M0,
MHI_PM_BIT_M2,
MHI_PM_BIT_M3_ENTER,
MHI_PM_BIT_M3,
MHI_PM_BIT_M3_EXIT,
MHI_PM_BIT_FW_DL_ERR,
MHI_PM_BIT_SYS_ERR_DETECT,
MHI_PM_BIT_SYS_ERR_PROCESS,
MHI_PM_BIT_SHUTDOWN_PROCESS,
MHI_PM_BIT_LD_ERR_FATAL_DETECT,
MHI_PM_BIT_MAX
};
/* internal power states */
enum MHI_PM_STATE {
MHI_PM_DISABLE = BIT(MHI_PM_BIT_DISABLE), /* MHI is not enabled */
MHI_PM_POR = BIT(MHI_PM_BIT_POR), /* reset state */
MHI_PM_M0 = BIT(MHI_PM_BIT_M0),
MHI_PM_M2 = BIT(MHI_PM_BIT_M2),
MHI_PM_M3_ENTER = BIT(MHI_PM_BIT_M3_ENTER),
MHI_PM_M3 = BIT(MHI_PM_BIT_M3),
MHI_PM_M3_EXIT = BIT(MHI_PM_BIT_M3_EXIT),
/* firmware download failure state */
MHI_PM_FW_DL_ERR = BIT(MHI_PM_BIT_FW_DL_ERR),
MHI_PM_SYS_ERR_DETECT = BIT(MHI_PM_BIT_SYS_ERR_DETECT),
MHI_PM_SYS_ERR_PROCESS = BIT(MHI_PM_BIT_SYS_ERR_PROCESS),
MHI_PM_SHUTDOWN_PROCESS = BIT(MHI_PM_BIT_SHUTDOWN_PROCESS),
/* link not accessible */
MHI_PM_LD_ERR_FATAL_DETECT = BIT(MHI_PM_BIT_LD_ERR_FATAL_DETECT),
};
#define MHI_REG_ACCESS_VALID(pm_state) ((pm_state & (MHI_PM_POR | MHI_PM_M0 | \
MHI_PM_M2 | MHI_PM_M3_ENTER | MHI_PM_M3_EXIT | \
MHI_PM_SYS_ERR_DETECT | MHI_PM_SYS_ERR_PROCESS | \
MHI_PM_SHUTDOWN_PROCESS | MHI_PM_FW_DL_ERR)))
#define MHI_PM_IN_ERROR_STATE(pm_state) (pm_state >= MHI_PM_FW_DL_ERR)
#define MHI_PM_IN_FATAL_STATE(pm_state) (pm_state == MHI_PM_LD_ERR_FATAL_DETECT)
#define MHI_DB_ACCESS_VALID(pm_state) (pm_state & MHI_PM_M0)
#define MHI_WAKE_DB_CLEAR_VALID(pm_state) (pm_state & (MHI_PM_M0 | \
MHI_PM_M2))
#define MHI_WAKE_DB_SET_VALID(pm_state) (pm_state & MHI_PM_M2)
#define MHI_WAKE_DB_FORCE_SET_VALID(pm_state) MHI_WAKE_DB_CLEAR_VALID(pm_state)
#define MHI_EVENT_ACCESS_INVALID(pm_state) (pm_state == MHI_PM_DISABLE || \
MHI_PM_IN_ERROR_STATE(pm_state))
#define MHI_PM_IN_SUSPEND_STATE(pm_state) (pm_state & \
(MHI_PM_M3_ENTER | MHI_PM_M3))
/* accepted buffer type for the channel */
enum MHI_XFER_TYPE {
MHI_XFER_BUFFER,
MHI_XFER_SKB,
MHI_XFER_SCLIST,
MHI_XFER_NOP, /* CPU offload channel, host does not accept transfer */
};
#define NR_OF_CMD_RINGS (1)
#define CMD_EL_PER_RING (128)
#define PRIMARY_CMD_RING (0)
#define MHI_DEV_WAKE_DB (127)
#define MHI_MAX_MTU (0xffff)
enum MHI_ER_TYPE {
MHI_ER_TYPE_INVALID = 0x0,
MHI_ER_TYPE_VALID = 0x1,
};
enum mhi_er_data_type {
MHI_ER_DATA_ELEMENT_TYPE,
MHI_ER_CTRL_ELEMENT_TYPE,
MHI_ER_TSYNC_ELEMENT_TYPE,
MHI_ER_DATA_TYPE_MAX = MHI_ER_TSYNC_ELEMENT_TYPE,
};
enum mhi_ch_ee_mask {
MHI_CH_EE_PBL = BIT(MHI_EE_PBL),
MHI_CH_EE_SBL = BIT(MHI_EE_SBL),
MHI_CH_EE_AMSS = BIT(MHI_EE_AMSS),
MHI_CH_EE_RDDM = BIT(MHI_EE_RDDM),
MHI_CH_EE_PTHRU = BIT(MHI_EE_PTHRU),
MHI_CH_EE_WFW = BIT(MHI_EE_WFW),
MHI_CH_EE_EDL = BIT(MHI_EE_EDL),
};
struct db_cfg {
bool reset_req;
bool db_mode;
u32 pollcfg;
enum MHI_BRSTMODE brstmode;
dma_addr_t db_val;
void (*process_db)(struct mhi_controller *mhi_cntrl,
struct db_cfg *db_cfg, void __iomem *io_addr,
dma_addr_t db_val);
};
struct mhi_pm_transitions {
enum MHI_PM_STATE from_state;
u32 to_states;
};
struct state_transition {
struct list_head node;
enum MHI_ST_TRANSITION state;
};
struct mhi_ctxt {
struct mhi_event_ctxt *er_ctxt;
struct mhi_chan_ctxt *chan_ctxt;
struct mhi_cmd_ctxt *cmd_ctxt;
dma_addr_t er_ctxt_addr;
dma_addr_t chan_ctxt_addr;
dma_addr_t cmd_ctxt_addr;
};
struct mhi_ring {
dma_addr_t dma_handle;
dma_addr_t iommu_base;
u64 *ctxt_wp; /* point to ctxt wp */
void *pre_aligned;
void *base;
void *rp;
void *wp;
size_t el_size;
size_t len;
size_t elements;
size_t alloc_size;
void __iomem *db_addr;
};
struct mhi_cmd {
struct mhi_ring ring;
spinlock_t lock;
};
struct mhi_buf_info {
dma_addr_t p_addr;
void *v_addr;
void *bb_addr;
void *wp;
size_t len;
void *cb_buf;
enum dma_data_direction dir;
};
struct mhi_event {
u32 er_index;
u32 intmod;
u32 msi;
int chan; /* this event ring is dedicated to a channel */
u32 priority;
enum mhi_er_data_type data_type;
struct mhi_ring ring;
struct db_cfg db_cfg;
bool hw_ring;
bool cl_manage;
bool offload_ev; /* managed by a device driver */
spinlock_t lock;
struct mhi_chan *mhi_chan; /* dedicated to channel */
struct tasklet_struct task;
int (*process_event)(struct mhi_controller *mhi_cntrl,
struct mhi_event *mhi_event,
u32 event_quota);
struct mhi_controller *mhi_cntrl;
};
struct mhi_chan {
u32 chan;
const char *name;
/*
* important: when consuming, increment tre_ring first; when releasing,
* decrement buf_ring first. If tre_ring has space, buf_ring is
* guaranteed to have space, so we do not need to check both rings.
*/
struct mhi_ring buf_ring;
struct mhi_ring tre_ring;
u32 er_index;
u32 intmod;
enum dma_data_direction dir;
struct db_cfg db_cfg;
u32 ee_mask;
enum MHI_XFER_TYPE xfer_type;
enum MHI_CH_STATE ch_state;
enum MHI_EV_CCS ccs;
bool lpm_notify;
bool configured;
bool offload_ch;
bool pre_alloc;
bool auto_start;
/* functions that generate the transfer ring elements */
int (*gen_tre)(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan, void *buf, void *cb,
size_t len, enum MHI_FLAGS flags);
int (*queue_xfer)(struct mhi_device *mhi_dev,
struct mhi_chan *mhi_chan, void *buf,
size_t len, enum MHI_FLAGS flags);
/* xfer call back */
struct mhi_device *mhi_dev;
void (*xfer_cb)(struct mhi_device *mhi_dev, struct mhi_result *result);
struct mutex mutex;
struct completion completion;
rwlock_t lock;
struct list_head node;
};
struct tsync_node {
struct list_head node;
u32 sequence;
u64 local_time;
u64 remote_time;
struct mhi_device *mhi_dev;
void (*cb_func)(struct mhi_device *mhi_dev, u32 sequence,
u64 local_time, u64 remote_time);
};
struct mhi_timesync {
u32 er_index;
void __iomem *db;
enum MHI_EV_CCS ccs;
struct completion completion;
spinlock_t lock;
struct list_head head;
};
struct mhi_bus {
struct list_head controller_list;
struct mutex lock;
};
/* default MHI timeout */
#define MHI_TIMEOUT_MS (1000)
extern struct mhi_bus mhi_bus;
/* debug fs related functions */
int mhi_debugfs_mhi_chan_show(struct seq_file *m, void *d);
int mhi_debugfs_mhi_event_show(struct seq_file *m, void *d);
int mhi_debugfs_mhi_states_show(struct seq_file *m, void *d);
int mhi_debugfs_trigger_reset(void *data, u64 val);
void mhi_deinit_debugfs(struct mhi_controller *mhi_cntrl);
void mhi_init_debugfs(struct mhi_controller *mhi_cntrl);
/* power management apis */
enum MHI_PM_STATE __must_check mhi_tryset_pm_state(
struct mhi_controller *mhi_cntrl,
enum MHI_PM_STATE state);
const char *to_mhi_pm_state_str(enum MHI_PM_STATE state);
void mhi_reset_chan(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan);
enum mhi_ee mhi_get_exec_env(struct mhi_controller *mhi_cntrl);
enum mhi_dev_state mhi_get_m_state(struct mhi_controller *mhi_cntrl);
int mhi_queue_state_transition(struct mhi_controller *mhi_cntrl,
enum MHI_ST_TRANSITION state);
void mhi_pm_st_worker(struct work_struct *work);
void mhi_fw_load_worker(struct work_struct *work);
void mhi_pm_sys_err_worker(struct work_struct *work);
int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl);
void mhi_ctrl_ev_task(unsigned long data);
int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl);
void mhi_pm_m1_transition(struct mhi_controller *mhi_cntrl);
int mhi_pm_m3_transition(struct mhi_controller *mhi_cntrl);
void mhi_notify(struct mhi_device *mhi_dev, enum MHI_CB cb_reason);
int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
struct mhi_event *mhi_event, u32 event_quota);
int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
struct mhi_event *mhi_event, u32 event_quota);
int mhi_process_tsync_event_ring(struct mhi_controller *mhi_cntrl,
struct mhi_event *mhi_event, u32 event_quota);
int mhi_send_cmd(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
enum MHI_CMD cmd);
int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl);
/* queue transfer buffer */
int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
void *buf, void *cb, size_t buf_len, enum MHI_FLAGS flags);
int mhi_queue_buf(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
void *buf, size_t len, enum MHI_FLAGS mflags);
int mhi_queue_skb(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
void *buf, size_t len, enum MHI_FLAGS mflags);
int mhi_queue_sclist(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
void *buf, size_t len, enum MHI_FLAGS mflags);
int mhi_queue_nop(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
void *buf, size_t len, enum MHI_FLAGS mflags);
/* register access methods */
void mhi_db_brstmode(struct mhi_controller *mhi_cntrl, struct db_cfg *db_cfg,
void __iomem *db_addr, dma_addr_t wp);
void mhi_db_brstmode_disable(struct mhi_controller *mhi_cntrl,
struct db_cfg *db_mode, void __iomem *db_addr,
dma_addr_t wp);
int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
void __iomem *base, u32 offset, u32 *out);
int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
void __iomem *base, u32 offset, u32 mask,
u32 shift, u32 *out);
void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
u32 offset, u32 val);
void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
u32 offset, u32 mask, u32 shift, u32 val);
void mhi_ring_er_db(struct mhi_event *mhi_event);
void mhi_write_db(struct mhi_controller *mhi_cntrl, void __iomem *db_addr,
dma_addr_t wp);
void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd);
void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan);
void mhi_set_mhi_state(struct mhi_controller *mhi_cntrl,
enum mhi_dev_state state);
int mhi_get_capability_offset(struct mhi_controller *mhi_cntrl, u32 capability,
u32 *offset);
int mhi_init_timesync(struct mhi_controller *mhi_cntrl);
/* memory allocation methods */
static inline void *mhi_alloc_coherent(struct mhi_controller *mhi_cntrl,
size_t size,
dma_addr_t *dma_handle,
gfp_t gfp)
{
void *buf = dma_zalloc_coherent(mhi_cntrl->dev, size, dma_handle, gfp);
if (buf)
atomic_add(size, &mhi_cntrl->alloc_size);
return buf;
}
static inline void mhi_free_coherent(struct mhi_controller *mhi_cntrl,
size_t size,
void *vaddr,
dma_addr_t dma_handle)
{
atomic_sub(size, &mhi_cntrl->alloc_size);
dma_free_coherent(mhi_cntrl->dev, size, vaddr, dma_handle);
}
struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl);
static inline void mhi_dealloc_device(struct mhi_controller *mhi_cntrl,
struct mhi_device *mhi_dev)
{
kfree(mhi_dev);
}
int mhi_destroy_device(struct device *dev, void *data);
void mhi_create_devices(struct mhi_controller *mhi_cntrl);
int mhi_alloc_bhie_table(struct mhi_controller *mhi_cntrl,
struct image_info **image_info, size_t alloc_size);
void mhi_free_bhie_table(struct mhi_controller *mhi_cntrl,
struct image_info *image_info);
int mhi_map_single_no_bb(struct mhi_controller *mhi_cntrl,
struct mhi_buf_info *buf_info);
int mhi_map_single_use_bb(struct mhi_controller *mhi_cntrl,
struct mhi_buf_info *buf_info);
void mhi_unmap_single_no_bb(struct mhi_controller *mhi_cntrl,
struct mhi_buf_info *buf_info);
void mhi_unmap_single_use_bb(struct mhi_controller *mhi_cntrl,
struct mhi_buf_info *buf_info);
/* initialization methods */
int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan);
void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan);
int mhi_init_mmio(struct mhi_controller *mhi_cntrl);
int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl);
void mhi_deinit_dev_ctxt(struct mhi_controller *mhi_cntrl);
int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl);
void mhi_deinit_free_irq(struct mhi_controller *mhi_cntrl);
int mhi_dtr_init(void);
/* isr handlers */
irqreturn_t mhi_msi_handlr(int irq_number, void *dev);
irqreturn_t mhi_intvec_threaded_handlr(int irq_number, void *dev);
irqreturn_t mhi_intvec_handlr(int irq_number, void *dev);
void mhi_ev_task(unsigned long data);
#ifdef CONFIG_MHI_DEBUG
#define MHI_ASSERT(cond, msg) do { \
if (cond) \
panic(msg); \
} while (0)
#else
#define MHI_ASSERT(cond, msg) do { \
if (cond) { \
MHI_ERR(msg); \
WARN_ON(cond); \
} \
} while (0)
#endif
#endif /* _MHI_INT_H */

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,23 @@
# SPDX-License-Identifier: GPL-2.0
menu "MHI device support"
config MHI_NETDEV
tristate "MHI NETDEV"
depends on MHI_BUS
help
MHI based net device driver for transferring IP traffic
between host and modem. By enabling this driver, clients
can transfer data using a standard network interface.
Over-the-air traffic goes through the mhi netdev interface.
config MHI_UCI
tristate "MHI UCI"
depends on MHI_BUS
help
The MHI based UCI driver transfers data between host and
modem using standard file operations from user space. Open,
read, write, ioctl, and close operations are supported by
this driver. Please check mhi_uci_match_table for all
supported channels that are exposed to user space.
endmenu
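Both device drivers depend on the MHI bus core option, so a defconfig fragment enabling the whole stack might look like this (an illustrative sketch; the option names come from the Kconfig above plus the MHI_BUS core it depends on):

CONFIG_MHI_BUS=y
CONFIG_MHI_NETDEV=y
CONFIG_MHI_UCI=y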


@@ -0,0 +1,4 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_MHI_NETDEV) += mhi_netdev.o
obj-$(CONFIG_MHI_UCI) += mhi_uci.o


@@ -0,0 +1,997 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2018, The Linux Foundation. All rights reserved.*/
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/msm_rmnet.h>
#include <linux/if_arp.h>
#include <linux/dma-mapping.h>
#include <linux/debugfs.h>
#include <linux/ipc_logging.h>
#include <linux/device.h>
#include <linux/errno.h>
#include <linux/of_device.h>
#include <linux/rtnetlink.h>
#include <linux/mhi.h>
#define MHI_NETDEV_DRIVER_NAME "mhi_netdev"
#define WATCHDOG_TIMEOUT (30 * HZ)
#define IPC_LOG_PAGES (100)
#ifdef CONFIG_MHI_DEBUG
#define IPC_LOG_LVL (MHI_MSG_LVL_VERBOSE)
#define MHI_ASSERT(cond, msg) do { \
if (cond) \
panic(msg); \
} while (0)
#define MSG_VERB(fmt, ...) do { \
if (mhi_netdev->msg_lvl <= MHI_MSG_LVL_VERBOSE) \
pr_err("[D][%s] " fmt, __func__, ##__VA_ARGS__);\
if (mhi_netdev->ipc_log && (mhi_netdev->ipc_log_lvl <= \
MHI_MSG_LVL_VERBOSE)) \
ipc_log_string(mhi_netdev->ipc_log, "[D][%s] " fmt, \
__func__, ##__VA_ARGS__); \
} while (0)
#else
#define IPC_LOG_LVL (MHI_MSG_LVL_ERROR)
#define MHI_ASSERT(cond, msg) do { \
if (cond) { \
MSG_ERR(msg); \
WARN_ON(cond); \
} \
} while (0)
#define MSG_VERB(fmt, ...)
#endif
#define MSG_LOG(fmt, ...) do { \
if (mhi_netdev->msg_lvl <= MHI_MSG_LVL_INFO) \
pr_err("[I][%s] " fmt, __func__, ##__VA_ARGS__);\
if (mhi_netdev->ipc_log && (mhi_netdev->ipc_log_lvl <= \
MHI_MSG_LVL_INFO)) \
ipc_log_string(mhi_netdev->ipc_log, "[I][%s] " fmt, \
__func__, ##__VA_ARGS__); \
} while (0)
#define MSG_ERR(fmt, ...) do { \
if (mhi_netdev->msg_lvl <= MHI_MSG_LVL_ERROR) \
pr_err("[E][%s] " fmt, __func__, ##__VA_ARGS__); \
if (mhi_netdev->ipc_log && (mhi_netdev->ipc_log_lvl <= \
MHI_MSG_LVL_ERROR)) \
ipc_log_string(mhi_netdev->ipc_log, "[E][%s] " fmt, \
__func__, ##__VA_ARGS__); \
} while (0)
struct mhi_stats {
u32 rx_int;
u32 tx_full;
u32 tx_pkts;
u32 rx_budget_overflow;
u32 rx_frag;
u32 alloc_failed;
};
/* important: do not exceed sk_buff->cb (48 bytes) */
struct mhi_skb_priv {
void *buf;
size_t size;
struct mhi_netdev *mhi_netdev;
};
struct mhi_netdev {
int alias;
struct mhi_device *mhi_dev;
spinlock_t rx_lock;
bool enabled;
rwlock_t pm_lock; /* state change lock */
int (*rx_queue)(struct mhi_netdev *mhi_netdev, gfp_t gfp_t);
struct work_struct alloc_work;
int wake;
struct sk_buff_head rx_allocated;
u32 mru;
const char *interface_name;
struct napi_struct napi;
struct net_device *ndev;
struct sk_buff *frag_skb;
bool recycle_buf;
struct mhi_stats stats;
struct dentry *dentry;
enum MHI_DEBUG_LEVEL msg_lvl;
enum MHI_DEBUG_LEVEL ipc_log_lvl;
void *ipc_log;
};
struct mhi_netdev_priv {
struct mhi_netdev *mhi_netdev;
};
static struct mhi_driver mhi_netdev_driver;
static void mhi_netdev_create_debugfs(struct mhi_netdev *mhi_netdev);
static __be16 mhi_netdev_ip_type_trans(struct sk_buff *skb)
{
__be16 protocol = 0;
/* determine L3 protocol */
switch (skb->data[0] & 0xf0) {
case 0x40:
protocol = htons(ETH_P_IP);
break;
case 0x60:
protocol = htons(ETH_P_IPV6);
break;
default:
/* default is QMAP */
protocol = htons(ETH_P_MAP);
break;
}
return protocol;
}
static void mhi_netdev_skb_destructor(struct sk_buff *skb)
{
struct mhi_skb_priv *skb_priv = (struct mhi_skb_priv *)(skb->cb);
struct mhi_netdev *mhi_netdev = skb_priv->mhi_netdev;
skb->data = skb->head;
skb_reset_tail_pointer(skb);
skb->len = 0;
MHI_ASSERT(skb->data != skb_priv->buf, "incorrect buf");
skb_queue_tail(&mhi_netdev->rx_allocated, skb);
}
static int mhi_netdev_alloc_skb(struct mhi_netdev *mhi_netdev, gfp_t gfp_t)
{
u32 cur_mru = mhi_netdev->mru;
struct mhi_device *mhi_dev = mhi_netdev->mhi_dev;
struct mhi_skb_priv *skb_priv;
int ret;
struct sk_buff *skb;
int no_tre = mhi_get_no_free_descriptors(mhi_dev, DMA_FROM_DEVICE);
int i;
for (i = 0; i < no_tre; i++) {
skb = alloc_skb(cur_mru, gfp_t);
if (!skb)
return -ENOMEM;
read_lock_bh(&mhi_netdev->pm_lock);
if (unlikely(!mhi_netdev->enabled)) {
MSG_ERR("Interface not enabled\n");
ret = -EIO;
goto error_queue;
}
skb_priv = (struct mhi_skb_priv *)skb->cb;
skb_priv->buf = skb->data;
skb_priv->size = cur_mru;
skb_priv->mhi_netdev = mhi_netdev;
skb->dev = mhi_netdev->ndev;
if (mhi_netdev->recycle_buf)
skb->destructor = mhi_netdev_skb_destructor;
spin_lock_bh(&mhi_netdev->rx_lock);
ret = mhi_queue_transfer(mhi_dev, DMA_FROM_DEVICE, skb,
skb_priv->size, MHI_EOT);
spin_unlock_bh(&mhi_netdev->rx_lock);
if (ret) {
MSG_ERR("Failed to queue skb, ret:%d\n", ret);
ret = -EIO;
goto error_queue;
}
read_unlock_bh(&mhi_netdev->pm_lock);
}
return 0;
error_queue:
skb->destructor = NULL;
read_unlock_bh(&mhi_netdev->pm_lock);
dev_kfree_skb_any(skb);
return ret;
}
static void mhi_netdev_alloc_work(struct work_struct *work)
{
struct mhi_netdev *mhi_netdev = container_of(work, struct mhi_netdev,
alloc_work);
/* sleep about 1 sec and retry; that should be enough time
* for the system to reclaim freed memory.
*/
const int sleep_ms = 1000;
int retry = 60;
int ret;
MSG_LOG("Entered\n");
do {
ret = mhi_netdev_alloc_skb(mhi_netdev, GFP_KERNEL);
/* sleep and try again */
if (ret == -ENOMEM) {
msleep(sleep_ms);
retry--;
}
} while (ret == -ENOMEM && retry);
MSG_LOG("Exit with status:%d retry:%d\n", ret, retry);
}
/* we will recycle buffers */
static int mhi_netdev_skb_recycle(struct mhi_netdev *mhi_netdev, gfp_t gfp_t)
{
struct mhi_device *mhi_dev = mhi_netdev->mhi_dev;
int no_tre;
int ret = 0;
struct sk_buff *skb;
struct mhi_skb_priv *skb_priv;
read_lock_bh(&mhi_netdev->pm_lock);
if (!mhi_netdev->enabled) {
read_unlock_bh(&mhi_netdev->pm_lock);
return -EIO;
}
no_tre = mhi_get_no_free_descriptors(mhi_dev, DMA_FROM_DEVICE);
spin_lock_bh(&mhi_netdev->rx_lock);
while (no_tre) {
skb = skb_dequeue(&mhi_netdev->rx_allocated);
/* no free buffers to recycle, reschedule work */
if (!skb) {
ret = -ENOMEM;
goto error_queue;
}
skb_priv = (struct mhi_skb_priv *)(skb->cb);
ret = mhi_queue_transfer(mhi_dev, DMA_FROM_DEVICE, skb,
skb_priv->size, MHI_EOT);
/* failed to queue buffer */
if (ret) {
MSG_ERR("Failed to queue skb, ret:%d\n", ret);
skb_queue_tail(&mhi_netdev->rx_allocated, skb);
goto error_queue;
}
no_tre--;
}
error_queue:
spin_unlock_bh(&mhi_netdev->rx_lock);
read_unlock_bh(&mhi_netdev->pm_lock);
return ret;
}
static void mhi_netdev_dealloc(struct mhi_netdev *mhi_netdev)
{
struct sk_buff *skb;
skb = skb_dequeue(&mhi_netdev->rx_allocated);
while (skb) {
skb->destructor = NULL;
kfree_skb(skb);
skb = skb_dequeue(&mhi_netdev->rx_allocated);
}
}
static int mhi_netdev_poll(struct napi_struct *napi, int budget)
{
struct net_device *dev = napi->dev;
struct mhi_netdev_priv *mhi_netdev_priv = netdev_priv(dev);
struct mhi_netdev *mhi_netdev = mhi_netdev_priv->mhi_netdev;
struct mhi_device *mhi_dev = mhi_netdev->mhi_dev;
int rx_work = 0;
int ret;
MSG_VERB("Entered\n");
read_lock_bh(&mhi_netdev->pm_lock);
if (!mhi_netdev->enabled) {
MSG_LOG("interface is disabled!\n");
napi_complete(napi);
read_unlock_bh(&mhi_netdev->pm_lock);
return 0;
}
rx_work = mhi_poll(mhi_dev, budget);
if (rx_work < 0) {
MSG_ERR("Error polling ret:%d\n", rx_work);
rx_work = 0;
napi_complete(napi);
goto exit_poll;
}
/* queue new buffers */
ret = mhi_netdev->rx_queue(mhi_netdev, GFP_ATOMIC);
if (ret == -ENOMEM) {
MSG_LOG("out of tre, queuing bg worker\n");
mhi_netdev->stats.alloc_failed++;
schedule_work(&mhi_netdev->alloc_work);
}
/* complete work if # of packets processed is less than the allocated budget */
if (rx_work < budget)
napi_complete(napi);
else
mhi_netdev->stats.rx_budget_overflow++;
exit_poll:
read_unlock_bh(&mhi_netdev->pm_lock);
MSG_VERB("polled %d pkts\n", rx_work);
return rx_work;
}
static int mhi_netdev_open(struct net_device *dev)
{
struct mhi_netdev_priv *mhi_netdev_priv = netdev_priv(dev);
struct mhi_netdev *mhi_netdev = mhi_netdev_priv->mhi_netdev;
struct mhi_device *mhi_dev = mhi_netdev->mhi_dev;
MSG_LOG("Opened net dev interface\n");
/* tx queue may not necessarily be stopped already
* so stop the queue if tx path is not enabled
*/
if (!mhi_dev->ul_chan)
netif_stop_queue(dev);
else
netif_start_queue(dev);
return 0;
}
static int mhi_netdev_change_mtu(struct net_device *dev, int new_mtu)
{
struct mhi_netdev_priv *mhi_netdev_priv = netdev_priv(dev);
struct mhi_netdev *mhi_netdev = mhi_netdev_priv->mhi_netdev;
struct mhi_device *mhi_dev = mhi_netdev->mhi_dev;
if (new_mtu < 0 || mhi_dev->mtu < new_mtu)
return -EINVAL;
dev->mtu = new_mtu;
return 0;
}
static int mhi_netdev_xmit(struct sk_buff *skb, struct net_device *dev)
{
struct mhi_netdev_priv *mhi_netdev_priv = netdev_priv(dev);
struct mhi_netdev *mhi_netdev = mhi_netdev_priv->mhi_netdev;
struct mhi_device *mhi_dev = mhi_netdev->mhi_dev;
int res = 0;
struct mhi_skb_priv *tx_priv;
MSG_VERB("Entered\n");
tx_priv = (struct mhi_skb_priv *)(skb->cb);
tx_priv->mhi_netdev = mhi_netdev;
read_lock_bh(&mhi_netdev->pm_lock);
if (unlikely(!mhi_netdev->enabled)) {
/* The only reason the interface could be disabled while we
* still get data is an SSR. We do not want to stop the queue
* and return an error. Instead we will flush all the uplink
* packets and return success
*/
res = NETDEV_TX_OK;
dev_kfree_skb_any(skb);
goto mhi_xmit_exit;
}
res = mhi_queue_transfer(mhi_dev, DMA_TO_DEVICE, skb, skb->len,
MHI_EOT);
if (res) {
MSG_VERB("Failed to queue with reason:%d\n", res);
netif_stop_queue(dev);
mhi_netdev->stats.tx_full++;
res = NETDEV_TX_BUSY;
goto mhi_xmit_exit;
}
mhi_netdev->stats.tx_pkts++;
mhi_xmit_exit:
read_unlock_bh(&mhi_netdev->pm_lock);
MSG_VERB("Exited\n");
return res;
}
static int mhi_netdev_ioctl_extended(struct net_device *dev, struct ifreq *ifr)
{
struct rmnet_ioctl_extended_s ext_cmd;
int rc = 0;
struct mhi_netdev_priv *mhi_netdev_priv = netdev_priv(dev);
struct mhi_netdev *mhi_netdev = mhi_netdev_priv->mhi_netdev;
struct mhi_device *mhi_dev = mhi_netdev->mhi_dev;
rc = copy_from_user(&ext_cmd, ifr->ifr_ifru.ifru_data,
sizeof(struct rmnet_ioctl_extended_s));
if (rc)
return -EFAULT;
switch (ext_cmd.extended_ioctl) {
case RMNET_IOCTL_SET_MRU:
if (!ext_cmd.u.data || ext_cmd.u.data > mhi_dev->mtu) {
MSG_ERR("Can't set MRU, value:%u is invalid max:%zu\n",
ext_cmd.u.data, mhi_dev->mtu);
return -EINVAL;
}
if (!mhi_netdev->recycle_buf) {
MSG_LOG("MRU change request to 0x%x\n", ext_cmd.u.data);
mhi_netdev->mru = ext_cmd.u.data;
}
break;
case RMNET_IOCTL_GET_SUPPORTED_FEATURES:
ext_cmd.u.data = 0;
break;
case RMNET_IOCTL_GET_DRIVER_NAME:
strlcpy(ext_cmd.u.if_name, mhi_netdev->interface_name,
sizeof(ext_cmd.u.if_name));
break;
case RMNET_IOCTL_SET_SLEEP_STATE:
read_lock_bh(&mhi_netdev->pm_lock);
if (mhi_netdev->enabled) {
if (ext_cmd.u.data && mhi_netdev->wake) {
/* Request to enable LPM */
MSG_VERB("Enable MHI LPM");
mhi_netdev->wake--;
mhi_device_put(mhi_dev);
} else if (!ext_cmd.u.data && !mhi_netdev->wake) {
/* Request to disable LPM */
MSG_VERB("Disable MHI LPM");
mhi_netdev->wake++;
mhi_device_get(mhi_dev);
}
} else {
MSG_ERR("Cannot set LPM value, MHI is not up.\n");
read_unlock_bh(&mhi_netdev->pm_lock);
return -ENODEV;
}
read_unlock_bh(&mhi_netdev->pm_lock);
break;
default:
rc = -EINVAL;
break;
}
if (copy_to_user(ifr->ifr_ifru.ifru_data, &ext_cmd,
sizeof(struct rmnet_ioctl_extended_s)))
rc = -EFAULT;
return rc;
}
static int mhi_netdev_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
{
int rc = 0;
struct rmnet_ioctl_data_s ioctl_data;
switch (cmd) {
case RMNET_IOCTL_SET_LLP_IP: /* set RAWIP protocol */
break;
case RMNET_IOCTL_GET_LLP: /* get link protocol state */
ioctl_data.u.operation_mode = RMNET_MODE_LLP_IP;
if (copy_to_user(ifr->ifr_ifru.ifru_data, &ioctl_data,
sizeof(struct rmnet_ioctl_data_s)))
rc = -EFAULT;
break;
case RMNET_IOCTL_GET_OPMODE: /* get operation mode */
ioctl_data.u.operation_mode = RMNET_MODE_LLP_IP;
if (copy_to_user(ifr->ifr_ifru.ifru_data, &ioctl_data,
sizeof(struct rmnet_ioctl_data_s)))
rc = -EFAULT;
break;
case RMNET_IOCTL_SET_QOS_ENABLE:
rc = -EINVAL;
break;
case RMNET_IOCTL_SET_QOS_DISABLE:
rc = 0;
break;
case RMNET_IOCTL_OPEN:
case RMNET_IOCTL_CLOSE:
/* we just ignore them and return success */
rc = 0;
break;
case RMNET_IOCTL_EXTENDED:
rc = mhi_netdev_ioctl_extended(dev, ifr);
break;
default:
/* don't fail any IOCTL right now */
rc = 0;
break;
}
return rc;
}
static const struct net_device_ops mhi_netdev_ops_ip = {
.ndo_open = mhi_netdev_open,
.ndo_start_xmit = mhi_netdev_xmit,
.ndo_do_ioctl = mhi_netdev_ioctl,
.ndo_change_mtu = mhi_netdev_change_mtu,
.ndo_set_mac_address = 0,
.ndo_validate_addr = 0,
};
static void mhi_netdev_setup(struct net_device *dev)
{
dev->netdev_ops = &mhi_netdev_ops_ip;
ether_setup(dev);
/* set this after calling ether_setup */
dev->header_ops = 0; /* No header */
dev->type = ARPHRD_RAWIP;
dev->hard_header_len = 0;
dev->addr_len = 0;
dev->flags &= ~(IFF_BROADCAST | IFF_MULTICAST);
dev->watchdog_timeo = WATCHDOG_TIMEOUT;
}
/* enable mhi_netdev netdev, call only after grabbing mhi_netdev.mutex */
static int mhi_netdev_enable_iface(struct mhi_netdev *mhi_netdev)
{
int ret = 0;
char ifalias[IFALIASZ];
char ifname[IFNAMSIZ];
struct mhi_device *mhi_dev = mhi_netdev->mhi_dev;
int no_tre;
MSG_LOG("Prepare the channels for transfer\n");
ret = mhi_prepare_for_transfer(mhi_dev);
if (ret) {
MSG_ERR("Failed to start TX chan ret %d\n", ret);
goto mhi_failed_to_start;
}
/* first time enabling the node */
if (!mhi_netdev->ndev) {
struct mhi_netdev_priv *mhi_netdev_priv;
snprintf(ifalias, sizeof(ifalias), "%s_%04x_%02u.%02u.%02u_%u",
mhi_netdev->interface_name, mhi_dev->dev_id,
mhi_dev->domain, mhi_dev->bus, mhi_dev->slot,
mhi_netdev->alias);
snprintf(ifname, sizeof(ifname), "%s%%d",
mhi_netdev->interface_name);
rtnl_lock();
mhi_netdev->ndev = alloc_netdev(sizeof(*mhi_netdev_priv),
ifname, NET_NAME_PREDICTABLE,
mhi_netdev_setup);
if (!mhi_netdev->ndev) {
ret = -ENOMEM;
rtnl_unlock();
goto net_dev_alloc_fail;
}
mhi_netdev->ndev->mtu = mhi_dev->mtu;
SET_NETDEV_DEV(mhi_netdev->ndev, &mhi_dev->dev);
dev_set_alias(mhi_netdev->ndev, ifalias, strlen(ifalias));
mhi_netdev_priv = netdev_priv(mhi_netdev->ndev);
mhi_netdev_priv->mhi_netdev = mhi_netdev;
rtnl_unlock();
netif_napi_add(mhi_netdev->ndev, &mhi_netdev->napi,
mhi_netdev_poll, NAPI_POLL_WEIGHT);
ret = register_netdev(mhi_netdev->ndev);
if (ret) {
MSG_ERR("Network device registration failed\n");
goto net_dev_reg_fail;
}
skb_queue_head_init(&mhi_netdev->rx_allocated);
}
write_lock_irq(&mhi_netdev->pm_lock);
mhi_netdev->enabled = true;
write_unlock_irq(&mhi_netdev->pm_lock);
/* queue buffer for rx path */
no_tre = mhi_get_no_free_descriptors(mhi_dev, DMA_FROM_DEVICE);
ret = mhi_netdev_alloc_skb(mhi_netdev, GFP_KERNEL);
if (ret)
schedule_work(&mhi_netdev->alloc_work);
/* if we recycle, prepare one more set */
if (mhi_netdev->recycle_buf)
for (; no_tre >= 0; no_tre--) {
struct sk_buff *skb = alloc_skb(mhi_netdev->mru,
GFP_KERNEL);
struct mhi_skb_priv *skb_priv;
if (!skb)
break;
skb_priv = (struct mhi_skb_priv *)skb->cb;
skb_priv->buf = skb->data;
skb_priv->size = mhi_netdev->mru;
skb_priv->mhi_netdev = mhi_netdev;
skb->dev = mhi_netdev->ndev;
skb->destructor = mhi_netdev_skb_destructor;
skb_queue_tail(&mhi_netdev->rx_allocated, skb);
}
napi_enable(&mhi_netdev->napi);
MSG_LOG("Exited.\n");
return 0;
net_dev_reg_fail:
netif_napi_del(&mhi_netdev->napi);
free_netdev(mhi_netdev->ndev);
mhi_netdev->ndev = NULL;
net_dev_alloc_fail:
mhi_unprepare_from_transfer(mhi_dev);
mhi_failed_to_start:
MSG_ERR("Exited ret %d.\n", ret);
return ret;
}
static void mhi_netdev_xfer_ul_cb(struct mhi_device *mhi_dev,
struct mhi_result *mhi_result)
{
struct mhi_netdev *mhi_netdev = mhi_device_get_devdata(mhi_dev);
struct sk_buff *skb = mhi_result->buf_addr;
struct net_device *ndev = mhi_netdev->ndev;
ndev->stats.tx_packets++;
ndev->stats.tx_bytes += skb->len;
dev_kfree_skb(skb);
if (netif_queue_stopped(ndev))
netif_wake_queue(ndev);
}
static int mhi_netdev_process_fragment(struct mhi_netdev *mhi_netdev,
struct sk_buff *skb)
{
struct sk_buff *temp_skb;
if (mhi_netdev->frag_skb) {
/* merge the new skb into the old fragment */
temp_skb = skb_copy_expand(mhi_netdev->frag_skb, 0, skb->len,
GFP_ATOMIC);
if (!temp_skb) {
dev_kfree_skb(mhi_netdev->frag_skb);
mhi_netdev->frag_skb = NULL;
return -ENOMEM;
}
dev_kfree_skb_any(mhi_netdev->frag_skb);
mhi_netdev->frag_skb = temp_skb;
memcpy(skb_put(mhi_netdev->frag_skb, skb->len), skb->data,
skb->len);
} else {
mhi_netdev->frag_skb = skb_copy(skb, GFP_ATOMIC);
if (!mhi_netdev->frag_skb)
return -ENOMEM;
}
mhi_netdev->stats.rx_frag++;
return 0;
}
static void mhi_netdev_xfer_dl_cb(struct mhi_device *mhi_dev,
struct mhi_result *mhi_result)
{
struct mhi_netdev *mhi_netdev = mhi_device_get_devdata(mhi_dev);
struct sk_buff *skb = mhi_result->buf_addr;
struct net_device *dev = mhi_netdev->ndev;
int ret = 0;
if (mhi_result->transaction_status == -ENOTCONN) {
dev_kfree_skb(skb);
return;
}
skb_put(skb, mhi_result->bytes_xferd);
dev->stats.rx_packets++;
dev->stats.rx_bytes += mhi_result->bytes_xferd;
/* merge skbs together; it's a chained transfer */
if (mhi_result->transaction_status == -EOVERFLOW ||
mhi_netdev->frag_skb) {
ret = mhi_netdev_process_fragment(mhi_netdev, skb);
/* recycle the skb */
if (mhi_netdev->recycle_buf)
mhi_netdev_skb_destructor(skb);
else
dev_kfree_skb(skb);
if (ret)
return;
}
/* more data will come, don't submit the buffer */
if (mhi_result->transaction_status == -EOVERFLOW)
return;
if (mhi_netdev->frag_skb) {
skb = mhi_netdev->frag_skb;
skb->dev = dev;
mhi_netdev->frag_skb = NULL;
}
skb->protocol = mhi_netdev_ip_type_trans(skb);
netif_receive_skb(skb);
}
static void mhi_netdev_status_cb(struct mhi_device *mhi_dev, enum MHI_CB mhi_cb)
{
struct mhi_netdev *mhi_netdev = mhi_device_get_devdata(mhi_dev);
if (mhi_cb != MHI_CB_PENDING_DATA)
return;
if (napi_schedule_prep(&mhi_netdev->napi)) {
__napi_schedule(&mhi_netdev->napi);
mhi_netdev->stats.rx_int++;
return;
}
}
#ifdef CONFIG_DEBUG_FS
static struct dentry *dentry;
static int mhi_netdev_debugfs_trigger_reset(void *data, u64 val)
{
struct mhi_netdev *mhi_netdev = data;
struct mhi_device *mhi_dev = mhi_netdev->mhi_dev;
int ret;
MSG_LOG("Triggering channel reset\n");
/* disable the interface so no data processing */
write_lock_irq(&mhi_netdev->pm_lock);
mhi_netdev->enabled = false;
write_unlock_irq(&mhi_netdev->pm_lock);
napi_disable(&mhi_netdev->napi);
/* disable all hardware channels */
mhi_unprepare_from_transfer(mhi_dev);
/* clean up all allocated buffers */
mhi_netdev_dealloc(mhi_netdev);
MSG_LOG("Restarting iface\n");
ret = mhi_netdev_enable_iface(mhi_netdev);
if (ret)
return ret;
return 0;
}
DEFINE_DEBUGFS_ATTRIBUTE(mhi_netdev_debugfs_trigger_reset_fops, NULL,
mhi_netdev_debugfs_trigger_reset, "%llu\n");
static void mhi_netdev_create_debugfs(struct mhi_netdev *mhi_netdev)
{
char node_name[32];
int i;
const umode_t mode = 0600;
struct dentry *file;
struct mhi_device *mhi_dev = mhi_netdev->mhi_dev;
const struct {
char *name;
u32 *ptr;
} debugfs_table[] = {
{
"rx_int",
&mhi_netdev->stats.rx_int
},
{
"tx_full",
&mhi_netdev->stats.tx_full
},
{
"tx_pkts",
&mhi_netdev->stats.tx_pkts
},
{
"rx_budget_overflow",
&mhi_netdev->stats.rx_budget_overflow
},
{
"rx_fragmentation",
&mhi_netdev->stats.rx_frag
},
{
"alloc_failed",
&mhi_netdev->stats.alloc_failed
},
{
NULL, NULL
},
};
/* Both TX & RX client handles contain the same device info */
snprintf(node_name, sizeof(node_name), "%s_%04x_%02u.%02u.%02u_%u",
mhi_netdev->interface_name, mhi_dev->dev_id, mhi_dev->domain,
mhi_dev->bus, mhi_dev->slot, mhi_netdev->alias);
if (IS_ERR_OR_NULL(dentry))
return;
mhi_netdev->dentry = debugfs_create_dir(node_name, dentry);
if (IS_ERR_OR_NULL(mhi_netdev->dentry))
return;
file = debugfs_create_u32("msg_lvl", mode, mhi_netdev->dentry,
(u32 *)&mhi_netdev->msg_lvl);
if (IS_ERR_OR_NULL(file))
return;
/* Add debug stats table */
for (i = 0; debugfs_table[i].name; i++) {
file = debugfs_create_u32(debugfs_table[i].name, mode,
mhi_netdev->dentry,
debugfs_table[i].ptr);
if (IS_ERR_OR_NULL(file))
return;
}
debugfs_create_file_unsafe("reset", mode, mhi_netdev->dentry,
mhi_netdev,
&mhi_netdev_debugfs_trigger_reset_fops);
}
static void mhi_netdev_create_debugfs_dir(void)
{
dentry = debugfs_create_dir(MHI_NETDEV_DRIVER_NAME, 0);
}
#else
static void mhi_netdev_create_debugfs(struct mhi_netdev *mhi_netdev)
{
}
static void mhi_netdev_create_debugfs_dir(void)
{
}
#endif
static void mhi_netdev_remove(struct mhi_device *mhi_dev)
{
struct mhi_netdev *mhi_netdev = mhi_device_get_devdata(mhi_dev);
MSG_LOG("Remove notification received\n");
write_lock_irq(&mhi_netdev->pm_lock);
mhi_netdev->enabled = false;
write_unlock_irq(&mhi_netdev->pm_lock);
napi_disable(&mhi_netdev->napi);
netif_napi_del(&mhi_netdev->napi);
mhi_netdev_dealloc(mhi_netdev);
unregister_netdev(mhi_netdev->ndev);
free_netdev(mhi_netdev->ndev);
flush_work(&mhi_netdev->alloc_work);
if (!IS_ERR_OR_NULL(mhi_netdev->dentry))
debugfs_remove_recursive(mhi_netdev->dentry);
}
static int mhi_netdev_probe(struct mhi_device *mhi_dev,
const struct mhi_device_id *id)
{
int ret;
struct mhi_netdev *mhi_netdev;
struct device_node *of_node = mhi_dev->dev.of_node;
char node_name[32];
if (!of_node)
return -ENODEV;
mhi_netdev = devm_kzalloc(&mhi_dev->dev, sizeof(*mhi_netdev),
GFP_KERNEL);
if (!mhi_netdev)
return -ENOMEM;
mhi_netdev->alias = of_alias_get_id(of_node, "mhi_netdev");
if (mhi_netdev->alias < 0)
return -ENODEV;
mhi_netdev->mhi_dev = mhi_dev;
mhi_device_set_devdata(mhi_dev, mhi_netdev);
ret = of_property_read_u32(of_node, "mhi,mru", &mhi_netdev->mru);
if (ret)
return -ENODEV;
ret = of_property_read_string(of_node, "mhi,interface-name",
&mhi_netdev->interface_name);
if (ret)
mhi_netdev->interface_name = mhi_netdev_driver.driver.name;
mhi_netdev->recycle_buf = of_property_read_bool(of_node,
"mhi,recycle-buf");
mhi_netdev->rx_queue = mhi_netdev->recycle_buf ?
mhi_netdev_skb_recycle : mhi_netdev_alloc_skb;
spin_lock_init(&mhi_netdev->rx_lock);
rwlock_init(&mhi_netdev->pm_lock);
INIT_WORK(&mhi_netdev->alloc_work, mhi_netdev_alloc_work);
/* create ipc log buffer */
snprintf(node_name, sizeof(node_name), "%s_%04x_%02u.%02u.%02u_%u",
mhi_netdev->interface_name, mhi_dev->dev_id, mhi_dev->domain,
mhi_dev->bus, mhi_dev->slot, mhi_netdev->alias);
mhi_netdev->ipc_log = ipc_log_context_create(IPC_LOG_PAGES, node_name,
0);
mhi_netdev->msg_lvl = MHI_MSG_LVL_ERROR;
mhi_netdev->ipc_log_lvl = IPC_LOG_LVL;
/* setup network interface */
ret = mhi_netdev_enable_iface(mhi_netdev);
if (ret)
return ret;
mhi_netdev_create_debugfs(mhi_netdev);
return 0;
}
static const struct mhi_device_id mhi_netdev_match_table[] = {
{ .chan = "IP_HW0" },
{ .chan = "IP_HW_ADPL" },
{},
};
static struct mhi_driver mhi_netdev_driver = {
.id_table = mhi_netdev_match_table,
.probe = mhi_netdev_probe,
.remove = mhi_netdev_remove,
.ul_xfer_cb = mhi_netdev_xfer_ul_cb,
.dl_xfer_cb = mhi_netdev_xfer_dl_cb,
.status_cb = mhi_netdev_status_cb,
.driver = {
.name = "mhi_netdev",
.owner = THIS_MODULE,
}
};
static int __init mhi_netdev_init(void)
{
mhi_netdev_create_debugfs_dir();
return mhi_driver_register(&mhi_netdev_driver);
}
module_init(mhi_netdev_init);
MODULE_DESCRIPTION("MHI NETDEV Network Interface");
MODULE_LICENSE("GPL v2");
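For reference, mhi_netdev_probe() above requires an OF node carrying an "mhi_netdev" alias and an "mhi,mru" property; "mhi,interface-name" and "mhi,recycle-buf" are optional. A hypothetical devicetree sketch, with node name and values purely illustrative (the property names are taken from the probe function):

aliases {
	mhi_netdev0 = &mhi_netdev_0; /* consumed via of_alias_get_id() */
};

mhi_netdev_0: mhi_netdev {
	mhi,mru = <0x4000>; /* required RX buffer size */
	mhi,interface-name = "rmnet_mhi"; /* optional; defaults to driver name */
	mhi,recycle-buf; /* optional; enables skb recycling */
};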


@@ -0,0 +1,689 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2018, The Linux Foundation. All rights reserved.*/
#include <linux/cdev.h>
#include <linux/device.h>
#include <linux/dma-direction.h>
#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/ipc_logging.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/poll.h>
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/wait.h>
#include <linux/uaccess.h>
#include <linux/mhi.h>
#define DEVICE_NAME "mhi"
#define MHI_UCI_DRIVER_NAME "mhi_uci"
struct uci_chan {
wait_queue_head_t wq;
spinlock_t lock;
struct list_head pending; /* user space waiting to read */
struct uci_buf *cur_buf; /* current buffer user space reading */
size_t rx_size;
};
struct uci_buf {
void *data;
size_t len;
struct list_head node;
};
struct uci_dev {
struct list_head node;
dev_t devt;
struct device *dev;
struct mhi_device *mhi_dev;
const char *chan;
struct mutex mutex; /* sync open and close */
struct uci_chan ul_chan;
struct uci_chan dl_chan;
size_t mtu;
int ref_count;
bool enabled;
void *ipc_log;
};
struct mhi_uci_drv {
struct list_head head;
struct mutex lock;
struct class *class;
int major;
dev_t dev_t;
};
static enum MHI_DEBUG_LEVEL msg_lvl = MHI_MSG_LVL_ERROR;
#ifdef CONFIG_MHI_DEBUG
#define IPC_LOG_LVL (MHI_MSG_LVL_VERBOSE)
#define MHI_UCI_IPC_LOG_PAGES (25)
#else
#define IPC_LOG_LVL (MHI_MSG_LVL_ERROR)
#define MHI_UCI_IPC_LOG_PAGES (1)
#endif
#ifdef CONFIG_MHI_DEBUG
#define MSG_VERB(fmt, ...) do { \
if (msg_lvl <= MHI_MSG_LVL_VERBOSE) \
pr_err("[D][%s] " fmt, __func__, ##__VA_ARGS__); \
if (uci_dev->ipc_log && (IPC_LOG_LVL <= MHI_MSG_LVL_VERBOSE)) \
ipc_log_string(uci_dev->ipc_log, "[D][%s] " fmt, \
__func__, ##__VA_ARGS__); \
} while (0)
#else
#define MSG_VERB(fmt, ...)
#endif
#define MSG_LOG(fmt, ...) do { \
if (msg_lvl <= MHI_MSG_LVL_INFO) \
pr_err("[I][%s] " fmt, __func__, ##__VA_ARGS__); \
if (uci_dev->ipc_log && (IPC_LOG_LVL <= MHI_MSG_LVL_INFO)) \
ipc_log_string(uci_dev->ipc_log, "[I][%s] " fmt, \
__func__, ##__VA_ARGS__); \
} while (0)
#define MSG_ERR(fmt, ...) do { \
if (msg_lvl <= MHI_MSG_LVL_ERROR) \
pr_err("[E][%s] " fmt, __func__, ##__VA_ARGS__); \
if (uci_dev->ipc_log && (IPC_LOG_LVL <= MHI_MSG_LVL_ERROR)) \
ipc_log_string(uci_dev->ipc_log, "[E][%s] " fmt, \
__func__, ##__VA_ARGS__); \
} while (0)
#define MAX_UCI_DEVICES (64)
static DECLARE_BITMAP(uci_minors, MAX_UCI_DEVICES);
static struct mhi_uci_drv mhi_uci_drv;
static int mhi_queue_inbound(struct uci_dev *uci_dev)
{
struct mhi_device *mhi_dev = uci_dev->mhi_dev;
int nr_trbs = mhi_get_no_free_descriptors(mhi_dev, DMA_FROM_DEVICE);
size_t mtu = uci_dev->mtu;
void *buf;
struct uci_buf *uci_buf;
int ret = -EIO, i;
for (i = 0; i < nr_trbs; i++) {
buf = kmalloc(mtu + sizeof(*uci_buf), GFP_KERNEL);
if (!buf)
return -ENOMEM;
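/* bookkeeping struct is carved out of the tail of the same allocation */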
uci_buf = buf + mtu;
uci_buf->data = buf;
MSG_VERB("Allocated buf %d of %d size %ld\n", i, nr_trbs, mtu);
ret = mhi_queue_transfer(mhi_dev, DMA_FROM_DEVICE, buf, mtu,
MHI_EOT);
if (ret) {
kfree(buf);
MSG_ERR("Failed to queue buffer %d\n", i);
return ret;
}
}
return ret;
}
static long mhi_uci_ioctl(struct file *file,
unsigned int cmd,
unsigned long arg)
{
struct uci_dev *uci_dev = file->private_data;
struct mhi_device *mhi_dev = uci_dev->mhi_dev;
long ret = -ERESTARTSYS;
mutex_lock(&uci_dev->mutex);
if (uci_dev->enabled)
ret = mhi_ioctl(mhi_dev, cmd, arg);
mutex_unlock(&uci_dev->mutex);
return ret;
}
static int mhi_uci_release(struct inode *inode, struct file *file)
{
struct uci_dev *uci_dev = file->private_data;
mutex_lock(&uci_dev->mutex);
uci_dev->ref_count--;
if (!uci_dev->ref_count) {
struct uci_buf *itr, *tmp;
struct uci_chan *uci_chan;
MSG_LOG("Last client left, closing node\n");
if (uci_dev->enabled)
mhi_unprepare_from_transfer(uci_dev->mhi_dev);
/* clean inbound channel */
uci_chan = &uci_dev->dl_chan;
list_for_each_entry_safe(itr, tmp, &uci_chan->pending, node) {
list_del(&itr->node);
kfree(itr->data);
}
if (uci_chan->cur_buf)
kfree(uci_chan->cur_buf->data);
uci_chan->cur_buf = NULL;
if (!uci_dev->enabled) {
MSG_LOG("Node is deleted, freeing dev node\n");
mutex_unlock(&uci_dev->mutex);
mutex_destroy(&uci_dev->mutex);
clear_bit(MINOR(uci_dev->devt), uci_minors);
kfree(uci_dev);
return 0;
}
}
mutex_unlock(&uci_dev->mutex);
MSG_LOG("exit: ref_count:%d\n", uci_dev->ref_count);
return 0;
}
static unsigned int mhi_uci_poll(struct file *file, poll_table *wait)
{
struct uci_dev *uci_dev = file->private_data;
struct mhi_device *mhi_dev = uci_dev->mhi_dev;
struct uci_chan *uci_chan;
unsigned int mask = 0;
poll_wait(file, &uci_dev->dl_chan.wq, wait);
poll_wait(file, &uci_dev->ul_chan.wq, wait);
uci_chan = &uci_dev->dl_chan;
spin_lock_bh(&uci_chan->lock);
if (!uci_dev->enabled) {
mask = POLLERR;
} else if (!list_empty(&uci_chan->pending) || uci_chan->cur_buf) {
MSG_VERB("Client can read from node\n");
mask |= POLLIN | POLLRDNORM;
}
spin_unlock_bh(&uci_chan->lock);
uci_chan = &uci_dev->ul_chan;
spin_lock_bh(&uci_chan->lock);
if (!uci_dev->enabled) {
mask |= POLLERR;
} else if (mhi_get_no_free_descriptors(mhi_dev, DMA_TO_DEVICE) > 0) {
MSG_VERB("Client can write to node\n");
mask |= POLLOUT | POLLWRNORM;
}
spin_unlock_bh(&uci_chan->lock);
MSG_LOG("Client attempted to poll, returning mask 0x%x\n", mask);
return mask;
}
static ssize_t mhi_uci_write(struct file *file,
const char __user *buf,
size_t count,
loff_t *offp)
{
struct uci_dev *uci_dev = file->private_data;
struct mhi_device *mhi_dev = uci_dev->mhi_dev;
struct uci_chan *uci_chan = &uci_dev->ul_chan;
size_t bytes_xfered = 0;
int ret, nr_avail;
if (!buf || !count)
return -EINVAL;
/* confirm channel is active */
spin_lock_bh(&uci_chan->lock);
if (!uci_dev->enabled) {
spin_unlock_bh(&uci_chan->lock);
return -ERESTARTSYS;
}
MSG_VERB("Enter: to xfer:%lu bytes\n", count);
while (count) {
size_t xfer_size;
void *kbuf;
enum MHI_FLAGS flags;
spin_unlock_bh(&uci_chan->lock);
/* wait for free descriptors */
ret = wait_event_interruptible(uci_chan->wq,
(!uci_dev->enabled) ||
(nr_avail = mhi_get_no_free_descriptors(mhi_dev,
DMA_TO_DEVICE)) > 0);
if (ret == -ERESTARTSYS || !uci_dev->enabled) {
MSG_LOG("Exit signal caught for node or not enabled\n");
return -ERESTARTSYS;
}
xfer_size = min_t(size_t, count, uci_dev->mtu);
kbuf = kmalloc(xfer_size, GFP_KERNEL);
if (!kbuf) {
MSG_ERR("Failed to allocate memory %lu\n", xfer_size);
return -ENOMEM;
}
		ret = copy_from_user(kbuf, buf, xfer_size);
		if (unlikely(ret)) {
			/* copy_from_user returns # of bytes not copied */
			kfree(kbuf);
			return -EFAULT;
		}
spin_lock_bh(&uci_chan->lock);
		/* chain when more data remains and descriptors are free;
		 * otherwise close the transfer with EOT
		 */
if (nr_avail > 1 && (count - xfer_size))
flags = MHI_CHAIN;
else
flags = MHI_EOT;
if (uci_dev->enabled)
ret = mhi_queue_transfer(mhi_dev, DMA_TO_DEVICE, kbuf,
xfer_size, flags);
else
ret = -ERESTARTSYS;
if (ret) {
kfree(kbuf);
goto sys_interrupt;
}
bytes_xfered += xfer_size;
count -= xfer_size;
buf += xfer_size;
}
spin_unlock_bh(&uci_chan->lock);
MSG_VERB("Exit: Number of bytes xferred:%lu\n", bytes_xfered);
return bytes_xfered;
sys_interrupt:
spin_unlock_bh(&uci_chan->lock);
return ret;
}
static ssize_t mhi_uci_read(struct file *file,
char __user *buf,
size_t count,
loff_t *ppos)
{
struct uci_dev *uci_dev = file->private_data;
struct mhi_device *mhi_dev = uci_dev->mhi_dev;
struct uci_chan *uci_chan = &uci_dev->dl_chan;
struct uci_buf *uci_buf;
char *ptr;
size_t to_copy;
int ret = 0;
if (!buf)
return -EINVAL;
MSG_VERB("Client provided buf len:%lu\n", count);
/* confirm channel is active */
spin_lock_bh(&uci_chan->lock);
if (!uci_dev->enabled) {
spin_unlock_bh(&uci_chan->lock);
return -ERESTARTSYS;
}
/* No data available to read, wait */
if (!uci_chan->cur_buf && list_empty(&uci_chan->pending)) {
MSG_VERB("No data available to read waiting\n");
spin_unlock_bh(&uci_chan->lock);
ret = wait_event_interruptible(uci_chan->wq,
(!uci_dev->enabled ||
!list_empty(&uci_chan->pending)));
if (ret == -ERESTARTSYS) {
MSG_LOG("Exit signal caught for node\n");
return -ERESTARTSYS;
}
spin_lock_bh(&uci_chan->lock);
if (!uci_dev->enabled) {
MSG_LOG("node is disabled\n");
ret = -ERESTARTSYS;
goto read_error;
}
}
/* new read, get the next descriptor from the list */
if (!uci_chan->cur_buf) {
uci_buf = list_first_entry_or_null(&uci_chan->pending,
struct uci_buf, node);
if (unlikely(!uci_buf)) {
ret = -EIO;
goto read_error;
}
list_del(&uci_buf->node);
uci_chan->cur_buf = uci_buf;
uci_chan->rx_size = uci_buf->len;
MSG_VERB("Got pkt of size:%zu\n", uci_chan->rx_size);
}
uci_buf = uci_chan->cur_buf;
spin_unlock_bh(&uci_chan->lock);
/* Copy the buffer to user space */
to_copy = min_t(size_t, count, uci_chan->rx_size);
ptr = uci_buf->data + (uci_buf->len - uci_chan->rx_size);
	ret = copy_to_user(buf, ptr, to_copy);
	if (ret)
		return -EFAULT;
MSG_VERB("Copied %lu of %lu bytes\n", to_copy, uci_chan->rx_size);
uci_chan->rx_size -= to_copy;
/* we finished with this buffer, queue it back to hardware */
if (!uci_chan->rx_size) {
spin_lock_bh(&uci_chan->lock);
uci_chan->cur_buf = NULL;
if (uci_dev->enabled)
ret = mhi_queue_transfer(mhi_dev, DMA_FROM_DEVICE,
uci_buf->data, uci_dev->mtu,
MHI_EOT);
else
ret = -ERESTARTSYS;
if (ret) {
MSG_ERR("Failed to recycle element\n");
kfree(uci_buf->data);
goto read_error;
}
spin_unlock_bh(&uci_chan->lock);
}
MSG_VERB("Returning %lu bytes\n", to_copy);
return to_copy;
read_error:
spin_unlock_bh(&uci_chan->lock);
return ret;
}
static int mhi_uci_open(struct inode *inode, struct file *filp)
{
struct uci_dev *uci_dev;
int ret = -EIO;
struct uci_buf *buf_itr, *tmp;
struct uci_chan *dl_chan;
mutex_lock(&mhi_uci_drv.lock);
list_for_each_entry(uci_dev, &mhi_uci_drv.head, node) {
if (uci_dev->devt == inode->i_rdev) {
ret = 0;
break;
}
}
mutex_unlock(&mhi_uci_drv.lock);
/* could not find a minor node */
if (ret)
return ret;
mutex_lock(&uci_dev->mutex);
	if (!uci_dev->enabled) {
		MSG_ERR("Node exists, but is not in active state!\n");
		ret = -ERESTARTSYS;
		goto error_open_chan;
	}
uci_dev->ref_count++;
MSG_LOG("Node open, ref counts %u\n", uci_dev->ref_count);
if (uci_dev->ref_count == 1) {
MSG_LOG("Starting channel\n");
ret = mhi_prepare_for_transfer(uci_dev->mhi_dev);
if (ret) {
MSG_ERR("Error starting transfer channels\n");
uci_dev->ref_count--;
goto error_open_chan;
}
ret = mhi_queue_inbound(uci_dev);
if (ret)
goto error_rx_queue;
}
filp->private_data = uci_dev;
mutex_unlock(&uci_dev->mutex);
return 0;
error_rx_queue:
dl_chan = &uci_dev->dl_chan;
mhi_unprepare_from_transfer(uci_dev->mhi_dev);
list_for_each_entry_safe(buf_itr, tmp, &dl_chan->pending, node) {
list_del(&buf_itr->node);
kfree(buf_itr->data);
}
error_open_chan:
mutex_unlock(&uci_dev->mutex);
return ret;
}
static const struct file_operations mhidev_fops = {
.open = mhi_uci_open,
.release = mhi_uci_release,
.read = mhi_uci_read,
.write = mhi_uci_write,
.poll = mhi_uci_poll,
.unlocked_ioctl = mhi_uci_ioctl,
};
static void mhi_uci_remove(struct mhi_device *mhi_dev)
{
struct uci_dev *uci_dev = mhi_device_get_devdata(mhi_dev);
MSG_LOG("Enter\n");
/* disable the node */
mutex_lock(&uci_dev->mutex);
spin_lock_irq(&uci_dev->dl_chan.lock);
spin_lock_irq(&uci_dev->ul_chan.lock);
uci_dev->enabled = false;
spin_unlock_irq(&uci_dev->ul_chan.lock);
spin_unlock_irq(&uci_dev->dl_chan.lock);
wake_up(&uci_dev->dl_chan.wq);
wake_up(&uci_dev->ul_chan.wq);
/* delete the node to prevent new opens */
device_destroy(mhi_uci_drv.class, uci_dev->devt);
uci_dev->dev = NULL;
mutex_lock(&mhi_uci_drv.lock);
list_del(&uci_dev->node);
mutex_unlock(&mhi_uci_drv.lock);
/* safe to free memory only if all file nodes are closed */
if (!uci_dev->ref_count) {
mutex_unlock(&uci_dev->mutex);
mutex_destroy(&uci_dev->mutex);
clear_bit(MINOR(uci_dev->devt), uci_minors);
kfree(uci_dev);
return;
}
MSG_LOG("Exit\n");
mutex_unlock(&uci_dev->mutex);
}
static int mhi_uci_probe(struct mhi_device *mhi_dev,
const struct mhi_device_id *id)
{
struct uci_dev *uci_dev;
int minor;
char node_name[32];
int dir;
uci_dev = kzalloc(sizeof(*uci_dev), GFP_KERNEL);
if (!uci_dev)
return -ENOMEM;
mutex_init(&uci_dev->mutex);
uci_dev->mhi_dev = mhi_dev;
minor = find_first_zero_bit(uci_minors, MAX_UCI_DEVICES);
if (minor >= MAX_UCI_DEVICES) {
kfree(uci_dev);
return -ENOSPC;
}
mutex_lock(&uci_dev->mutex);
mutex_lock(&mhi_uci_drv.lock);
uci_dev->devt = MKDEV(mhi_uci_drv.major, minor);
uci_dev->dev = device_create(mhi_uci_drv.class, &mhi_dev->dev,
uci_dev->devt, uci_dev,
DEVICE_NAME "_%04x_%02u.%02u.%02u%s%d",
mhi_dev->dev_id, mhi_dev->domain,
mhi_dev->bus, mhi_dev->slot, "_pipe_",
mhi_dev->ul_chan_id);
set_bit(minor, uci_minors);
/* create debugging buffer */
snprintf(node_name, sizeof(node_name), "mhi_uci_%04x_%02u.%02u.%02u_%d",
mhi_dev->dev_id, mhi_dev->domain, mhi_dev->bus, mhi_dev->slot,
mhi_dev->ul_chan_id);
uci_dev->ipc_log = ipc_log_context_create(MHI_UCI_IPC_LOG_PAGES,
node_name, 0);
for (dir = 0; dir < 2; dir++) {
struct uci_chan *uci_chan = (dir) ?
&uci_dev->ul_chan : &uci_dev->dl_chan;
spin_lock_init(&uci_chan->lock);
init_waitqueue_head(&uci_chan->wq);
INIT_LIST_HEAD(&uci_chan->pending);
}
uci_dev->mtu = min_t(size_t, id->driver_data, mhi_dev->mtu);
mhi_device_set_devdata(mhi_dev, uci_dev);
uci_dev->enabled = true;
list_add(&uci_dev->node, &mhi_uci_drv.head);
mutex_unlock(&mhi_uci_drv.lock);
mutex_unlock(&uci_dev->mutex);
MSG_LOG("channel:%s successfully probed\n", mhi_dev->chan_name);
return 0;
}
static void mhi_ul_xfer_cb(struct mhi_device *mhi_dev,
struct mhi_result *mhi_result)
{
struct uci_dev *uci_dev = mhi_device_get_devdata(mhi_dev);
struct uci_chan *uci_chan = &uci_dev->ul_chan;
MSG_VERB("status:%d xfer_len:%zu\n", mhi_result->transaction_status,
mhi_result->bytes_xferd);
kfree(mhi_result->buf_addr);
if (!mhi_result->transaction_status)
wake_up(&uci_chan->wq);
}
static void mhi_dl_xfer_cb(struct mhi_device *mhi_dev,
struct mhi_result *mhi_result)
{
struct uci_dev *uci_dev = mhi_device_get_devdata(mhi_dev);
struct uci_chan *uci_chan = &uci_dev->dl_chan;
unsigned long flags;
struct uci_buf *buf;
MSG_VERB("status:%d receive_len:%zu\n", mhi_result->transaction_status,
mhi_result->bytes_xferd);
if (mhi_result->transaction_status == -ENOTCONN) {
kfree(mhi_result->buf_addr);
return;
}
spin_lock_irqsave(&uci_chan->lock, flags);
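	/* recover the uci_buf metadata placed after the mtu-sized data area */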
buf = mhi_result->buf_addr + uci_dev->mtu;
buf->data = mhi_result->buf_addr;
buf->len = mhi_result->bytes_xferd;
list_add_tail(&buf->node, &uci_chan->pending);
spin_unlock_irqrestore(&uci_chan->lock, flags);
wake_up(&uci_chan->wq);
}
/* .driver_data stores max mtu */
static const struct mhi_device_id mhi_uci_match_table[] = {
{ .chan = "LOOPBACK", .driver_data = 0x1000 },
{ .chan = "SAHARA", .driver_data = 0x8000 },
{ .chan = "EFS", .driver_data = 0x1000 },
{ .chan = "QMI0", .driver_data = 0x1000 },
{ .chan = "QMI1", .driver_data = 0x1000 },
{ .chan = "TF", .driver_data = 0x1000 },
{ .chan = "DUN", .driver_data = 0x1000 },
{},
};
static struct mhi_driver mhi_uci_driver = {
.id_table = mhi_uci_match_table,
.remove = mhi_uci_remove,
.probe = mhi_uci_probe,
.ul_xfer_cb = mhi_ul_xfer_cb,
.dl_xfer_cb = mhi_dl_xfer_cb,
.driver = {
.name = MHI_UCI_DRIVER_NAME,
.owner = THIS_MODULE,
},
};
static int mhi_uci_init(void)
{
int ret;
ret = register_chrdev(0, MHI_UCI_DRIVER_NAME, &mhidev_fops);
if (ret < 0)
return ret;
mhi_uci_drv.major = ret;
mhi_uci_drv.class = class_create(THIS_MODULE, MHI_UCI_DRIVER_NAME);
	if (IS_ERR(mhi_uci_drv.class)) {
		unregister_chrdev(mhi_uci_drv.major, MHI_UCI_DRIVER_NAME);
		return -ENODEV;
	}
mutex_init(&mhi_uci_drv.lock);
INIT_LIST_HEAD(&mhi_uci_drv.head);
	ret = mhi_driver_register(&mhi_uci_driver);
	if (ret) {
		class_destroy(mhi_uci_drv.class);
		unregister_chrdev(mhi_uci_drv.major, MHI_UCI_DRIVER_NAME);
	}
	return ret;
}
module_init(mhi_uci_init);
MODULE_LICENSE("GPL v2");
MODULE_ALIAS("MHI_UCI");
MODULE_DESCRIPTION("MHI UCI Driver");

678
include/linux/mhi.h Normal file

@ -0,0 +1,678 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) 2018, The Linux Foundation. All rights reserved. */
#ifndef _MHI_H_
#define _MHI_H_
struct mhi_chan;
struct mhi_event;
struct mhi_ctxt;
struct mhi_cmd;
struct image_info;
struct bhi_vec_entry;
struct mhi_timesync;
struct mhi_buf_info;
/**
* enum MHI_CB - MHI callback
* @MHI_CB_IDLE: MHI entered idle state
* @MHI_CB_PENDING_DATA: New data available for client to process
* @MHI_CB_LPM_ENTER: MHI host entered low power mode
* @MHI_CB_LPM_EXIT: MHI host about to exit low power mode
 * @MHI_CB_EE_RDDM: MHI device entered RDDM execution environment
*/
enum MHI_CB {
MHI_CB_IDLE,
MHI_CB_PENDING_DATA,
MHI_CB_LPM_ENTER,
MHI_CB_LPM_EXIT,
MHI_CB_EE_RDDM,
};
/**
 * enum MHI_DEBUG_LEVEL - various debugging levels
*/
enum MHI_DEBUG_LEVEL {
MHI_MSG_LVL_VERBOSE,
MHI_MSG_LVL_INFO,
MHI_MSG_LVL_ERROR,
MHI_MSG_LVL_CRITICAL,
MHI_MSG_LVL_MASK_ALL,
};
/**
* enum MHI_FLAGS - Transfer flags
* @MHI_EOB: End of buffer for bulk transfer
* @MHI_EOT: End of transfer
* @MHI_CHAIN: Linked transfer
*/
enum MHI_FLAGS {
MHI_EOB,
MHI_EOT,
MHI_CHAIN,
};
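/*
 * Usage note: MHI_CHAIN marks every fragment of a multi-descriptor
 * transfer except the last one, which carries MHI_EOT (see the flag
 * selection in mhi_uci_write() for an example).
 */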
/**
* enum mhi_device_type - Device types
* @MHI_XFER_TYPE: Handles data transfer
* @MHI_TIMESYNC_TYPE: Use for timesync feature
* @MHI_CONTROLLER_TYPE: Control device
*/
enum mhi_device_type {
MHI_XFER_TYPE,
MHI_TIMESYNC_TYPE,
MHI_CONTROLLER_TYPE,
};
/**
 * enum mhi_ee - device current execution environment
* @MHI_EE_PBL - device in PBL
* @MHI_EE_SBL - device in SBL
* @MHI_EE_AMSS - device in mission mode (firmware fully loaded)
* @MHI_EE_RDDM - device in ram dump collection mode
* @MHI_EE_WFW - device in WLAN firmware mode
* @MHI_EE_PTHRU - device in PBL but configured in pass thru mode
* @MHI_EE_EDL - device in emergency download mode
*/
enum mhi_ee {
MHI_EE_PBL = 0x0,
MHI_EE_SBL = 0x1,
MHI_EE_AMSS = 0x2,
MHI_EE_RDDM = 0x3,
MHI_EE_WFW = 0x4,
MHI_EE_PTHRU = 0x5,
MHI_EE_EDL = 0x6,
MHI_EE_MAX_SUPPORTED = MHI_EE_EDL,
MHI_EE_DISABLE_TRANSITION, /* local EE, not related to mhi spec */
MHI_EE_MAX,
};
/**
* enum mhi_dev_state - device current MHI state
*/
enum mhi_dev_state {
MHI_STATE_RESET = 0x0,
MHI_STATE_READY = 0x1,
MHI_STATE_M0 = 0x2,
MHI_STATE_M1 = 0x3,
MHI_STATE_M2 = 0x4,
MHI_STATE_M3 = 0x5,
MHI_STATE_BHI = 0x7,
MHI_STATE_SYS_ERR = 0xFF,
MHI_STATE_MAX,
};
/**
 * struct image_info - firmware and rddm table
 * @mhi_buf - Contains device firmware and rddm table
 * @bhi_vec - Points to the BHI vector table for the image segments
 * @entries - # of entries in table
*/
struct image_info {
struct mhi_buf *mhi_buf;
struct bhi_vec_entry *bhi_vec;
u32 entries;
};
/**
* struct mhi_controller - Master controller structure for external modem
* @dev: Device associated with this controller
* @of_node: DT that has MHI configuration information
* @regs: Points to base of MHI MMIO register space
* @bhi: Points to base of MHI BHI register space
* @bhie: Points to base of MHI BHIe register space
* @wake_db: MHI WAKE doorbell register address
* @dev_id: PCIe device id of the external device
* @domain: PCIe domain the device connected to
* @bus: PCIe bus the device assigned to
* @slot: PCIe slot for the modem
* @iova_start: IOMMU starting address for data
* @iova_stop: IOMMU stop address for data
* @fw_image: Firmware image name for normal booting
* @edl_image: Firmware image name for emergency download mode
* @fbc_download: MHI host needs to do complete image transfer
* @rddm_size: RAM dump size that host should allocate for debugging purpose
* @sbl_size: SBL image size
* @seg_len: BHIe vector size
* @fbc_image: Points to firmware image buffer
* @rddm_image: Points to RAM dump buffer
* @max_chan: Maximum number of channels controller support
* @mhi_chan: Points to channel configuration table
* @lpm_chans: List of channels that require LPM notifications
* @total_ev_rings: Total # of event rings allocated
* @hw_ev_rings: Number of hardware event rings
* @sw_ev_rings: Number of software event rings
* @msi_required: Number of msi required to operate
* @msi_allocated: Number of msi allocated by bus master
* @irq: base irq # to request
* @mhi_event: MHI event ring configurations table
* @mhi_cmd: MHI command ring configurations table
* @mhi_ctxt: MHI device context, shared memory between host and device
* @timeout_ms: Timeout in ms for state transitions
* @pm_state: Power management state
* @ee: MHI device execution environment
 * @dev_state: Current MHI state
 * @status_cb: CB function to notify various power states to bus master
* @link_status: Query link status in case of abnormal value read from device
* @runtime_get: Async runtime resume function
 * @runtime_put: Release runtime votes
* @time_get: Return host time in us
* @lpm_disable: Request controller to disable link level low power modes
* @lpm_enable: Controller may enable link level low power modes again
* @priv_data: Points to bus master's private data
*/
struct mhi_controller {
struct list_head node;
struct mhi_device *mhi_dev;
/* device node for iommu ops */
struct device *dev;
struct device_node *of_node;
/* mmio base */
void __iomem *regs;
void __iomem *bhi;
void __iomem *bhie;
void __iomem *wake_db;
/* device topology */
u32 dev_id;
u32 domain;
u32 bus;
u32 slot;
/* addressing window */
dma_addr_t iova_start;
dma_addr_t iova_stop;
/* fw images */
const char *fw_image;
const char *edl_image;
/* mhi host manages downloading entire fbc images */
bool fbc_download;
size_t rddm_size;
size_t sbl_size;
size_t seg_len;
u32 session_id;
u32 sequence_id;
struct image_info *fbc_image;
struct image_info *rddm_image;
/* physical channel config data */
u32 max_chan;
struct mhi_chan *mhi_chan;
struct list_head lpm_chans; /* these chan require lpm notification */
/* physical event config data */
u32 total_ev_rings;
u32 hw_ev_rings;
u32 sw_ev_rings;
u32 msi_required;
u32 msi_allocated;
int *irq; /* interrupt table */
struct mhi_event *mhi_event;
/* cmd rings */
struct mhi_cmd *mhi_cmd;
/* mhi context (shared with device) */
struct mhi_ctxt *mhi_ctxt;
u32 timeout_ms;
/* caller should grab pm_mutex for suspend/resume operations */
struct mutex pm_mutex;
bool pre_init;
rwlock_t pm_lock;
u32 pm_state;
enum mhi_ee ee;
enum mhi_dev_state dev_state;
bool wake_set;
atomic_t dev_wake;
atomic_t alloc_size;
struct list_head transition_list;
spinlock_t transition_lock;
spinlock_t wlock;
/* debug counters */
u32 M0, M2, M3;
/* worker for different state transitions */
struct work_struct st_worker;
struct work_struct fw_worker;
struct work_struct syserr_worker;
wait_queue_head_t state_event;
/* shadow functions */
void (*status_cb)(struct mhi_controller *mhi_cntrl, void *priv,
enum MHI_CB reason);
int (*link_status)(struct mhi_controller *mhi_cntrl, void *priv);
void (*wake_get)(struct mhi_controller *mhi_cntrl, bool override);
void (*wake_put)(struct mhi_controller *mhi_cntrl, bool override);
int (*runtime_get)(struct mhi_controller *mhi_cntrl, void *priv);
void (*runtime_put)(struct mhi_controller *mhi_cntrl, void *priv);
u64 (*time_get)(struct mhi_controller *mhi_cntrl, void *priv);
void (*lpm_disable)(struct mhi_controller *mhi_cntrl, void *priv);
void (*lpm_enable)(struct mhi_controller *mhi_cntrl, void *priv);
int (*map_single)(struct mhi_controller *mhi_cntrl,
struct mhi_buf_info *buf);
void (*unmap_single)(struct mhi_controller *mhi_cntrl,
struct mhi_buf_info *buf);
/* channel to control DTR messaging */
struct mhi_device *dtr_dev;
/* bounce buffer settings */
bool bounce_buf;
size_t buffer_len;
/* supports time sync feature */
bool time_sync;
struct mhi_timesync *mhi_tsync;
struct mhi_device *tsync_dev;
/* kernel log level */
enum MHI_DEBUG_LEVEL klog_lvl;
/* private log level controller driver to set */
enum MHI_DEBUG_LEVEL log_lvl;
/* controller specific data */
void *priv_data;
void *log_buf;
struct dentry *dentry;
struct dentry *parent;
};
/**
 * struct mhi_device - mhi device structure associated with a channel pair
 * @dev: Device associated with the channels
 * @mtu: Maximum # of bytes the controller supports
* @ul_chan_id: MHI channel id for UL transfer
* @dl_chan_id: MHI channel id for DL transfer
* @tiocm: Device current terminal settings
* @priv: Driver private data
*/
struct mhi_device {
struct device dev;
u32 dev_id;
u32 domain;
u32 bus;
u32 slot;
size_t mtu;
int ul_chan_id;
int dl_chan_id;
int ul_event_id;
int dl_event_id;
u32 tiocm;
const struct mhi_device_id *id;
const char *chan_name;
struct mhi_controller *mhi_cntrl;
struct mhi_chan *ul_chan;
struct mhi_chan *dl_chan;
atomic_t dev_wake;
enum mhi_device_type dev_type;
void *priv_data;
int (*ul_xfer)(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
void *buf, size_t len, enum MHI_FLAGS flags);
int (*dl_xfer)(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
void *buf, size_t size, enum MHI_FLAGS flags);
void (*status_cb)(struct mhi_device *mhi_dev, enum MHI_CB reason);
};
/**
* struct mhi_result - Completed buffer information
* @buf_addr: Address of data buffer
* @dir: Channel direction
 * @bytes_xferd: # of bytes transferred
 * @transaction_status: Status of the last transfer
*/
struct mhi_result {
void *buf_addr;
enum dma_data_direction dir;
size_t bytes_xferd;
int transaction_status;
};
/**
* struct mhi_buf - Describes the buffer
* @buf: cpu address for the buffer
* @phys_addr: physical address of the buffer
* @dma_addr: iommu address for the buffer
* @len: # of bytes
* @name: Buffer label, for offload channel configurations name must be:
* ECA - Event context array data
* CCA - Channel context array data
*/
struct mhi_buf {
void *buf;
phys_addr_t phys_addr;
dma_addr_t dma_addr;
size_t len;
const char *name; /* ECA, CCA */
};
/**
* struct mhi_driver - mhi driver information
* @id_table: NULL terminated channel ID names
* @ul_xfer_cb: UL data transfer callback
* @dl_xfer_cb: DL data transfer callback
* @status_cb: Asynchronous status callback
*/
struct mhi_driver {
const struct mhi_device_id *id_table;
int (*probe)(struct mhi_device *mhi_dev,
const struct mhi_device_id *id);
void (*remove)(struct mhi_device *mhi_dev);
void (*ul_xfer_cb)(struct mhi_device *mhi_dev, struct mhi_result *res);
void (*dl_xfer_cb)(struct mhi_device *mhi_dev, struct mhi_result *res);
void (*status_cb)(struct mhi_device *mhi_dev, enum MHI_CB mhi_cb);
struct device_driver driver;
};
#define to_mhi_driver(drv) container_of(drv, struct mhi_driver, driver)
#define to_mhi_device(dev) container_of(dev, struct mhi_device, dev)
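/*
 * Minimal client sketch (illustrative only; the "my_mhi" names are
 * hypothetical). A client matches on channel name and registers
 * transfer callbacks, as the UCI driver in this snapshot does:
 *
 *	static const struct mhi_device_id my_mhi_match[] = {
 *		{ .chan = "LOOPBACK", .driver_data = 0x1000 },
 *		{},
 *	};
 *
 *	static struct mhi_driver my_mhi_driver = {
 *		.id_table = my_mhi_match,
 *		.probe = my_mhi_probe,
 *		.remove = my_mhi_remove,
 *		.ul_xfer_cb = my_mhi_ul_cb,
 *		.dl_xfer_cb = my_mhi_dl_cb,
 *		.driver = {
 *			.name = "my_mhi_client",
 *			.owner = THIS_MODULE,
 *		},
 *	};
 *
 * Register with mhi_driver_register() and tear down with
 * mhi_driver_unregister(), declared below.
 */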
static inline void mhi_device_set_devdata(struct mhi_device *mhi_dev,
void *priv)
{
mhi_dev->priv_data = priv;
}
static inline void *mhi_device_get_devdata(struct mhi_device *mhi_dev)
{
return mhi_dev->priv_data;
}
/**
* mhi_queue_transfer - Queue a buffer to hardware
 * All transfers are asynchronous
* @mhi_dev: Device associated with the channels
* @dir: Data direction
* @buf: Data buffer (skb for hardware channels)
* @len: Size in bytes
* @mflags: Interrupt flags for the device
*/
static inline int mhi_queue_transfer(struct mhi_device *mhi_dev,
enum dma_data_direction dir,
void *buf,
size_t len,
enum MHI_FLAGS mflags)
{
if (dir == DMA_TO_DEVICE)
return mhi_dev->ul_xfer(mhi_dev, mhi_dev->ul_chan, buf, len,
mflags);
else
return mhi_dev->dl_xfer(mhi_dev, mhi_dev->dl_chan, buf, len,
mflags);
}
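/*
 * Example (sketch, mirroring the UCI write path): queue an UL buffer
 * and keep ownership on failure; on success the buffer must stay alive
 * until ul_xfer_cb hands it back (the UCI callback kfrees it there).
 *
 *	ret = mhi_queue_transfer(mhi_dev, DMA_TO_DEVICE, kbuf, len, MHI_EOT);
 *	if (ret)
 *		kfree(kbuf);
 */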
static inline void *mhi_controller_get_devdata(struct mhi_controller *mhi_cntrl)
{
return mhi_cntrl->priv_data;
}
static inline void mhi_free_controller(struct mhi_controller *mhi_cntrl)
{
kfree(mhi_cntrl);
}
/**
* mhi_driver_register - Register driver with MHI framework
* @mhi_drv: mhi_driver structure
*/
int mhi_driver_register(struct mhi_driver *mhi_drv);
/**
* mhi_driver_unregister - Unregister a driver for mhi_devices
* @mhi_drv: mhi_driver structure
*/
void mhi_driver_unregister(struct mhi_driver *mhi_drv);
/**
* mhi_device_configure - configure ECA or CCA context
 * For offload channels that the client manages, call this
* function to configure channel context or event context
* array associated with the channel
 * @mhi_dev: Device associated with the channels
* @dir: Direction of the channel
* @mhi_buf: Configuration data
* @elements: # of configuration elements
*/
int mhi_device_configure(struct mhi_device *mhi_dev,
enum dma_data_direction dir,
struct mhi_buf *mhi_buf,
int elements);
/**
* mhi_device_get - disable all low power modes
* Only disables lpm, does not immediately exit low power mode
 * if the controller is already in a low power mode
* @mhi_dev: Device associated with the channels
*/
void mhi_device_get(struct mhi_device *mhi_dev);
/**
* mhi_device_get_sync - disable all low power modes
 * Synchronously disable all low power modes; exit low power mode if
 * the controller is already in a low power state
* @mhi_dev: Device associated with the channels
*/
int mhi_device_get_sync(struct mhi_device *mhi_dev);
/**
* mhi_device_put - re-enable low power modes
* @mhi_dev: Device associated with the channels
*/
void mhi_device_put(struct mhi_device *mhi_dev);
/**
* mhi_prepare_for_transfer - setup channel for data transfer
* Moves both UL and DL channel from RESET to START state
* @mhi_dev: Device associated with the channels
*/
int mhi_prepare_for_transfer(struct mhi_device *mhi_dev);
/**
 * mhi_unprepare_from_transfer - unprepare the channels
* Moves both UL and DL channels to RESET state
* @mhi_dev: Device associated with the channels
*/
void mhi_unprepare_from_transfer(struct mhi_device *mhi_dev);
/**
 * mhi_get_no_free_descriptors - Get free transfer descriptor count
 * Get # of TDs available to queue buffers
* @mhi_dev: Device associated with the channels
* @dir: Direction of the channel
*/
int mhi_get_no_free_descriptors(struct mhi_device *mhi_dev,
enum dma_data_direction dir);
/**
* mhi_poll - poll for any available data to consume
* This is only applicable for DL direction
* @mhi_dev: Device associated with the channels
 * @budget: Max # of descriptors to service before returning
*/
int mhi_poll(struct mhi_device *mhi_dev, u32 budget);
/**
* mhi_ioctl - user space IOCTL support for MHI channels
* Native support for setting TIOCM
* @mhi_dev: Device associated with the channels
* @cmd: IOCTL cmd
 * @arg: Optional parameter, ioctl cmd specific
*/
long mhi_ioctl(struct mhi_device *mhi_dev, unsigned int cmd, unsigned long arg);
/**
* mhi_alloc_controller - Allocate mhi_controller structure
* Allocate controller structure and additional data for controller
* private data. You may get the private data pointer by calling
* mhi_controller_get_devdata
* @size: # of additional bytes to allocate
*/
struct mhi_controller *mhi_alloc_controller(size_t size);
/**
* of_register_mhi_controller - Register MHI controller
 * Registers the MHI controller with the MHI bus framework; device tree
 * (DT) configuration is required
* @mhi_cntrl: MHI controller to register
*/
int of_register_mhi_controller(struct mhi_controller *mhi_cntrl);
void mhi_unregister_mhi_controller(struct mhi_controller *mhi_cntrl);
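/*
 * Controller bring-up sketch (register mapping, IRQ table and callback
 * setup omitted; "struct my_priv" is hypothetical, and NULL-on-failure
 * from mhi_alloc_controller() is assumed):
 *
 *	mhi_cntrl = mhi_alloc_controller(sizeof(struct my_priv));
 *	if (!mhi_cntrl)
 *		return -ENOMEM;
 *	... populate dev, of_node, regs and callbacks ...
 *	ret = of_register_mhi_controller(mhi_cntrl);
 *	if (!ret)
 *		ret = mhi_sync_power_up(mhi_cntrl);
 */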
/**
* mhi_bdf_to_controller - Look up a registered controller
* Search for controller based on device identification
* @domain: RC domain of the device
* @bus: Bus device connected to
* @slot: Slot device assigned to
* @dev_id: Device Identification
*/
struct mhi_controller *mhi_bdf_to_controller(u32 domain, u32 bus, u32 slot,
u32 dev_id);
/**
* mhi_prepare_for_power_up - Do pre-initialization before power up
 * This is optional; call it before power up if the controller does not
 * want the bus framework to automatically free any allocated memory during
 * the shutdown process.
* @mhi_cntrl: MHI controller
*/
int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl);
/**
* mhi_async_power_up - Starts MHI power up sequence
* @mhi_cntrl: MHI controller
*/
int mhi_async_power_up(struct mhi_controller *mhi_cntrl);
int mhi_sync_power_up(struct mhi_controller *mhi_cntrl);
/**
* mhi_power_down - Start MHI power down sequence
* @mhi_cntrl: MHI controller
 * @graceful: link is still accessible, so do a graceful shutdown; otherwise
 * the host is shut down without putting the device into RESET state
*/
void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful);
/**
 * mhi_unprepare_after_power_down - free any memory allocated for power up
* @mhi_cntrl: MHI controller
*/
void mhi_unprepare_after_power_down(struct mhi_controller *mhi_cntrl);
/**
* mhi_pm_suspend - Move MHI into a suspended state
 * Transition to MHI M3 state from M0, M1, or M2 state
* @mhi_cntrl: MHI controller
*/
int mhi_pm_suspend(struct mhi_controller *mhi_cntrl);
/**
* mhi_pm_resume - Resume MHI from suspended state
 * Transition to MHI M0 state from M3 state
* @mhi_cntrl: MHI controller
*/
int mhi_pm_resume(struct mhi_controller *mhi_cntrl);
/**
 * mhi_download_rddm_img - Download ramdump image from device for
 * debugging purposes.
 * @mhi_cntrl: MHI controller
 * @in_panic: If we are trying to capture the image during a kernel panic
*/
int mhi_download_rddm_img(struct mhi_controller *mhi_cntrl, bool in_panic);
/**
* mhi_force_rddm_mode - Force external device into rddm mode
 * to collect a device ramdump. This is useful if the host driver asserts
 * and we need to see the device state as well.
* @mhi_cntrl: MHI controller
*/
int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl);
#ifndef CONFIG_ARCH_QCOM
#ifdef CONFIG_MHI_DEBUG
#define MHI_VERB(fmt, ...) do { \
	if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_VERBOSE) \
		pr_debug("[D][%s] " fmt, __func__, ##__VA_ARGS__);\
} while (0)
#else
#define MHI_VERB(fmt, ...)
#endif
#define MHI_LOG(fmt, ...) do { \
	if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_INFO) \
pr_info("[I][%s] " fmt, __func__, ##__VA_ARGS__);\
} while (0)
#define MHI_ERR(fmt, ...) do { \
if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_ERROR) \
pr_err("[E][%s] " fmt, __func__, ##__VA_ARGS__); \
} while (0)
#define MHI_CRITICAL(fmt, ...) do { \
if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_CRITICAL) \
pr_alert("[C][%s] " fmt, __func__, ##__VA_ARGS__); \
} while (0)
#else /* ARCH QCOM */
#include <linux/ipc_logging.h>
#ifdef CONFIG_MHI_DEBUG
#define MHI_VERB(fmt, ...) do { \
if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_VERBOSE) \
pr_err("[D][%s] " fmt, __func__, ##__VA_ARGS__);\
if (mhi_cntrl->log_buf && \
(mhi_cntrl->log_lvl <= MHI_MSG_LVL_VERBOSE)) \
ipc_log_string(mhi_cntrl->log_buf, "[D][%s] " fmt, \
__func__, ##__VA_ARGS__); \
} while (0)
#else
#define MHI_VERB(fmt, ...)
#endif
#define MHI_LOG(fmt, ...) do { \
if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_INFO) \
pr_err("[I][%s] " fmt, __func__, ##__VA_ARGS__);\
if (mhi_cntrl->log_buf && \
(mhi_cntrl->log_lvl <= MHI_MSG_LVL_INFO)) \
ipc_log_string(mhi_cntrl->log_buf, "[I][%s] " fmt, \
__func__, ##__VA_ARGS__); \
} while (0)
#define MHI_ERR(fmt, ...) do { \
if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_ERROR) \
pr_err("[E][%s] " fmt, __func__, ##__VA_ARGS__); \
if (mhi_cntrl->log_buf && \
(mhi_cntrl->log_lvl <= MHI_MSG_LVL_ERROR)) \
ipc_log_string(mhi_cntrl->log_buf, "[E][%s] " fmt, \
__func__, ##__VA_ARGS__); \
} while (0)
#define MHI_CRITICAL(fmt, ...) do { \
if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_CRITICAL) \
pr_err("[C][%s] " fmt, __func__, ##__VA_ARGS__); \
if (mhi_cntrl->log_buf && \
(mhi_cntrl->log_lvl <= MHI_MSG_LVL_CRITICAL)) \
ipc_log_string(mhi_cntrl->log_buf, "[C][%s] " fmt, \
__func__, ##__VA_ARGS__); \
} while (0)
#endif
#endif /* _MHI_H_ */

include/linux/mod_devicetable.h

@ -761,4 +761,17 @@ struct typec_device_id {
	kernel_ulong_t driver_data;
};
#define MHI_NAME_SIZE 32
/**
* struct mhi_device_id - MHI device identification
* @chan: MHI channel name
 * @driver_data: Driver data
*/
struct mhi_device_id {
const char chan[MHI_NAME_SIZE];
kernel_ulong_t driver_data;
};
#endif /* LINUX_MOD_DEVICETABLE_H */