mmc: sdhci-msm-ice: Add Inline Crypto Engine (ICE) support

The eMMC controller may have an Inline Crypto Engine (ICE) attached,
which can be used to encrypt/decrypt data going to/from the eMMC.
This patch adds a new client driver, sdhci-msm-ice.c, which interacts
with the ICE driver present in drivers/crypto/msm/ and thus provides
an interface for the low-level SDHCI driver to do the data
encryption/decryption.
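
For reference, the binding to the ICE device is resolved through a DT
phandle; a condensed sketch of that lookup, taken from the
sdhci-msm-ice.c code added in the diff below (error handling trimmed):

/* Condensed from sdhci_msm_ice_get_vops() below: the "sdhc-msm-crypto"
 * phandle in the SDHC node points at the ICE device, from which the
 * variant ops (and, similarly, the platform device) are resolved.
 */
static struct qcom_ice_variant_ops *sdhci_msm_ice_get_vops(struct device *dev)
{
    struct qcom_ice_variant_ops *ice_vops = NULL;
    struct device_node *node;

    node = of_parse_phandle(dev->of_node, SDHC_MSM_CRYPTO_LABEL, 0);
    if (!node)
        return NULL;

    ice_vops = qcom_ice_get_variant_ops(node);
    of_node_put(node);
    return ice_vops;
}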

mmc: sdhci-msm: Add Inline Crypto Engine (ICE) support

Add ICE support to the low-level driver sdhci-msm.c. This code is
primarily responsible for enabling ICE (if present), managing the
ICE clocks and ICE suspend/resume, and providing a few host->ops
for the sdhci driver to use the ICE functionality.
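
For reference, the crypto-related entries added to sdhci_msm_ops in
the diff below (the pre-existing ops are elided):

static struct sdhci_ops sdhci_msm_ops = {
    .crypto_engine_cfg      = sdhci_msm_ice_cfg,
    .crypto_engine_cfg_end  = sdhci_msm_ice_cfg_end,
    .crypto_engine_reset    = sdhci_msm_ice_reset,
    /* ... existing ops unchanged ... */
};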

mmc: sdhci: Add Inline Crypto Engine (ICE) support

This patch adds ICE support to the sdhci driver. It uses the
new ICE host->ops, such as config/reset, to configure/reset the
ICE HW as appropriate.
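
A condensed view of the non-CQ issue path from the sdhci_request()
change in the diff below; slot 0 is used for non-CQ requests and the
request is failed with -EIO if the configuration cannot be applied:

if (host->is_crypto_en) {
    spin_unlock_irqrestore(&host->lock, flags);
    /* sdhci_crypto_cfg() resets the engine if needed, then calls
     * host->ops->crypto_engine_cfg() for slot 0.
     */
    if (sdhci_crypto_cfg(host, mrq, 0))
        goto end_req;   /* completes the request with -EIO */
    spin_lock_irqsave(&host->lock, flags);
}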

mmc: cqhci: Add Inline Crypto Engine (ICE) support

Add changes to configure ICE for data encryption/decryption
using CQE.
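
When the controller reports crypto support (CQHCI_CAP_CS), CQHCI
switches to 128-bit task descriptors and packs the ICE context into
the upper 64 bits; a condensed view of cqe_prep_crypto_desc() from
the diff below:

if (cq_host->caps & CQHCI_CAP_CRYPTO_SUPPORT) {
    /* Upper 64 bits of the 128-bit task descriptor hold the ICE context. */
    __le64 *ice_desc = (__le64 *)((u8 *)task_desc +
                        CQHCI_TASK_DESC_TASK_PARAMS_SIZE);

    memset(ice_desc, 0, CQHCI_TASK_DESC_ICE_PARAMS_SIZE);
    if (ice_ctx)
        *ice_desc = cpu_to_le64(ice_ctx);
}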

mmc: cqe: add new crypto_cfg_reset host operation

When encryption/decryption is enabled in CQ mode, legacy commands
sent in the HALT state may use a slot other than slot 0 for their
crypto configuration information. The slot that gets selected
depends on the last slot used while in CQ mode. This causes the
data of legacy commands to be encrypted/decrypted using the wrong
slot's crypto config details. Hence, clear the crypto configuration
of a slot used in CQ mode whenever its request completes.
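
A condensed view of the completion-path change in cqhci_finish_mrq()
below; on controllers without ICE HCI the slot's crypto configuration
is cleared as soon as its task completes:

/* Put the slot back into bypass so a later legacy command cannot
 * inherit stale crypto settings from the CQ-mode transfer.
 */
if (!(cq_host->caps & CQHCI_CAP_CRYPTO_SUPPORT) &&
    cq_host->ops->crypto_cfg_reset)
    cq_host->ops->crypto_cfg_reset(mmc, tag);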

mmc: sdhci-msm-ice: Add crypto register dump for debug upon error

Dump crypto-related register information upon error for
debugging purposes.

crypto: ice: Make ICE init & reset API synchronous

ICE init & reset can now be synchronous, because ICE no longer needs
to go to the secure side for any ICE configuration. This simplifies
the interface and makes the calls more efficient.

crypto: ice: general driver clean-up

* Removed the spinlock, as it was not locking against anything
* Removed the conversion of interrupt status to an error number,
  as it is not used by the API client, and in case several bits are
  set only one error is ever handled and the rest get lost.
  Instead, pass the complete status to the client.
* Removed redundant includes and variables
* The vops structure was returned after performing a lookup in the
  DT. There is no need for that, as we already know which structure
  to return.
* Other minor corrections

mmc: sdhci-msm-ice: Update ice config vop to config_start

The config vop of the ICE driver has been renamed to config_start.
Update the sdhci-msm-ice driver to reflect this change.

mmc: sdhci-msm-ice: Enable ICE HCI if supported

Check whether the SDHC has ICE HCI support. If support is present,
enable the cryptographic support inside the SDHC.

Also ensure that it is re-enabled after the SDHC is reset, since
ICE HCI is disabled by default.
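
A condensed sketch of the enable sequence from
sdhci_msm_enable_ice_hci() in the diff below (enable == true case):

/* Global crypto enable: clear the disable bit in the vendor register. */
config = readl_relaxed(host->ioaddr + HC_VENDOR_SPECIFIC_FUNC4);
config &= ~DISABLE_CRYPTO;
writel_relaxed(config, host->ioaddr + HC_VENDOR_SPECIFIC_FUNC4);
mb();  /* CQCAP lives in a different register space */

/* Only after that does CQCAP report ICE HCI; if it does, enable it. */
if (readl_relaxed(msm_host->cryptoio + ICE_CQ_CAPABILITIES) & ICE_HCI_SUPPORT) {
    config = readl_relaxed(msm_host->cryptoio + ICE_CQ_CONFIG);
    config |= CRYPTO_GENERAL_ENABLE;
    writel_relaxed(config, msm_host->cryptoio + ICE_CQ_CONFIG);
}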

mmc: sdhci-msm: Update ICE reset register offset for ICE HCI

From SDHC v5.0 onwards, the ICE reset register offset has changed.
Select the register offset based on the SDHC version.
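
Condensed from the sdhci_msm_reset() change below; the ICE core is
reset in sync with the SDHC core through the version-appropriate
register:

if (msm_host->ice.pdev) {
    if (msm_host->ice_hci_support)  /* SDHC v5.0+ (ICE HCI) */
        writel_relaxed(1, host->ioaddr + HC_VENDOR_SPECIFIC_ICE_CTRL);
    else                            /* pre-v5.0 offset */
        writel_relaxed(1, host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL);
}
sdhci_reset(host, mask);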

mmc: sdhci-msm-ice: Factor out update config from sdhci_msm_ice_cfg

Factor out the logic for updating the SDHC ICE config registers
from sdhci_msm_ice_cfg().

For ICE3.0, a different set of SDHC ICE registers needs to be
updated. Keeping this logic in separate functions gives a logical
separation between ICE2.0 and ICE3.0.
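
The resulting dispatch, condensed from sdhci_msm_ice_cfg() in the
diff below:

/* After the common parameter lookup, program either the ICE3.0
 * (ICE HCI) registers or the legacy per-slot ICE2.0 registers.
 */
if (msm_host->ice_hci_support)
    sdhci_msm_ice_hci_update_noncq_cfg(host, dun, bypass, key_index);
else
    sdhci_msm_ice_update_cfg(host, dun, slot, bypass, key_index, cdu_sz);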

mmc: sdhci-msm-ice: Factor out ice_cfg_start from sdhci_msm_ice_cfg

Factor out the logic for getting the ICE config parameters from
sdhci_msm_ice_cfg().

With ICE2.0, the same sdhci_msm_ice_cfg() function is called from
both the cmdq and non-CQ paths, but with ICE3.0 support cmdq needs a
separate host op. Since the logic for getting the ICE config is
common to the non-CQ and cmdq paths, keeping it in a separate
function allows it to be reused in the cmdq host op as well.
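
A condensed view of the shared helper, from sdhci_msm_ice_get_cfg()
in the diff below; both sdhci_msm_ice_cfg() (non-CQ) and
sdhci_msm_ice_cqe_cfg() (cmdq) call it:

struct ice_data_setting ice_set = {};
int err;

/* Ask the ICE driver for this request's setting ... */
err = msm_host->ice.vops->config_start(msm_host->ice.pdev, req,
                                       &ice_set, false);
if (err)
    return err;

/* ... and translate it into the bypass mode and key index. */
if (rq_data_dir(req) == WRITE)
    *bypass = ice_set.encr_bypass ? SDHCI_MSM_ICE_ENABLE_BYPASS :
                                    SDHCI_MSM_ICE_DISABLE_BYPASS;
else if (rq_data_dir(req) == READ)
    *bypass = ice_set.decr_bypass ? SDHCI_MSM_ICE_ENABLE_BYPASS :
                                    SDHCI_MSM_ICE_DISABLE_BYPASS;
*key_index = ice_set.crypto_data.key_index;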

mmc: sdhci-msm-ice: Add new sdhci host_op for updating ice config

Add a new sdhci host_op for updating the ICE configuration while
sending a request through cmdq. This adds provision for supporting
the ICE context configuration for ICE HCI.
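
Condensed from sdhci_msm_cqe_crypto_cfg() in the diff below (error
logging trimmed); the op resets the engine if a controller reset made
it necessary and then builds the ICE context for the cmdq task:

static int sdhci_msm_cqe_crypto_cfg(struct mmc_host *mmc,
        struct mmc_request *mrq, u32 slot, u64 *ice_ctx)
{
    struct sdhci_host *host = mmc_priv(mmc);
    int err;

    if (!host->is_crypto_en)
        return 0;

    if (host->crypto_reset_reqd && host->ops->crypto_engine_reset) {
        err = host->ops->crypto_engine_reset(host);
        if (err)
            return err;
        host->crypto_reset_reqd = false;
    }

    return sdhci_msm_ice_cqe_cfg(host, mrq, slot, ice_ctx);
}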

mmc: cmdq_hci: ice: Changes for supporting ICE HCI in CMDQ mode

From SDHC v5.0 onwards, the SDHC includes an inline interface
for cryptographic operations, namely ICE HCI.
This patch includes the driver changes for supporting crypto
operations with ICE HCI in cmdq mode.

Also add support for clearing the ICE configuration.
Once mmc request processing is completed, the mmc driver has to
call config_end to ensure the key information is cleared by the ICE
driver. This call is optional for FDE but required for FBE.
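
A condensed view of the completion-side flow from the diff below,
showing where config_end is invoked:

/* cqhci_finish_mrq(): once the data transfer for a tag completes ... */
if (cq_host->ops->crypto_cfg_end) {
    err = cq_host->ops->crypto_cfg_end(mmc, mrq);
    if (err)
        pr_err("%s: failed to end ice config: err %d tag %d\n",
               mmc_hostname(mmc), err, tag);
}

/* ... which lands in sdhci_msm_ice_cfg_end() and lets the ICE driver
 * clear the key context for this request:
 */
err = msm_host->ice.vops->config_end(req);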

mmc: sdhci-msm-ice: Changes for supporting ICE HCI in non CMDQ mode

From SDHC v5.0 onwards, the SDHC includes an inline interface for
cryptographic operations, namely ICE HCI.

This patch includes the driver changes for supporting crypto
operations with ICE HCI in non-CQ mode.
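
Condensed from sdhci_msm_ice_hci_update_noncq_cfg() in the diff
below; in non-CQ mode the crypto parameters and the DUN go into
dedicated ICE HCI registers rather than the per-slot ICE2.0
registers:

unsigned int crypto_params = 0;

/* CE (crypto enable) is the inverse of the ICE2.0 bypass flag,
 * CCI (crypto configuration index) replaces the key index.
 */
crypto_params |= ((!bypass) & MASK_SDHCI_MSM_ICE_HCI_PARAM_CE)
                    << OFFSET_SDHCI_MSM_ICE_HCI_PARAM_CE;
crypto_params |= (key_index & MASK_SDHCI_MSM_ICE_HCI_PARAM_CCI)
                    << OFFSET_SDHCI_MSM_ICE_HCI_PARAM_CCI;

writel_relaxed(crypto_params, msm_host->cryptoio + ICE_NONCQ_CRYPTO_PARAMS);
writel_relaxed(dun & 0xFFFFFFFF, msm_host->cryptoio + ICE_NONCQ_CRYPTO_DUN);
mb();  /* order the ICE programming before the SDHCI request */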

mmc: host: sdhci: Add new host_op for clearing ice configuration

Add a new host op for clearing the ICE configuration.
This config_end host op needs to be invoked to clear the ICE
configuration once mmc request processing is completed.
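
Condensed from the sdhci_request_done() change below; the non-CQ
completion path mirrors the cmdq one:

spin_unlock_irqrestore(&host->lock, flags);

/* Let the crypto engine clear its per-request configuration before
 * the request is returned to the block layer.
 */
sdhci_crypto_cfg_end(host, mrq);

mmc_request_done(host->mmc, mrq);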

mmc: sdhci-msm-ice: add support for FBE over F2FS

Add support for FBE to work with the F2FS filesystem on eMMC
based devices. For F2FS+FBE on eMMC, the crypto data unit
size (CDU size) should be 4KB, as F2FS encrypts/decrypts the
data in blocks of at least 4KB with (inode|pgidx) as the
corresponding data unit number (DUN).
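
Condensed from sdhci_msm_ice_cfg() in the diff below; when the bio
carries a DUN (FBE), it is used with a 4KB CDU, otherwise the sector
number is used with the default 512-byte CDU:

u64 dun = 0;
u32 cdu_sz = SDHCI_MSM_ICE_TR_DATA_UNIT_512_B;

if (bio_dun(req->bio)) {
    /* FBE (e.g. F2FS): DUN comes from the bio, 4KB crypto data units */
    dun = bio_dun(req->bio);
    cdu_sz = SDHCI_MSM_ICE_TR_DATA_UNIT_4_KB;
} else {
    /* FDE: DUN is the LBA, default 512-byte crypto data units */
    dun = req->__sector;
}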

mmc: card: Set INLINECRYPT queue flag based on host capability

Set the INLINECRYPT queue flag if the host supports h/w based inline
encryption.
This is needed to let the filesystem know that the underlying storage
device supports inline encryption, so that data encryption/decryption
is handled at the h/w level rather than by the filesystem.

Also set the inline-crypto support host flag if the SDHC controller
is capable of performing inline encryption/decryption.
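
Condensed from the mmc_setup_queue() change below; the
inlinecrypt_support host flag itself is set from
sdhci_msm_initialize_ice() when ICE comes up:

if (host->inlinecrypt_support)
    queue_flag_set_unlocked(QUEUE_FLAG_INLINECRYPT, mq->queue);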

mmc: sdhci-msm: get the load notification from clock scaling

This is needed to scale the ICE clock up/down at runtime according
to the load on the eMMC.
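
Condensed from the sdhci_msm_notify_load() change below (error
logging trimmed); the ICE clock follows the eMMC load between the
min/max rates taken from the DT table:

if (!IS_ERR(msm_host->ice_clk)) {
    u32 clk_rate = (state == MMC_LOAD_LOW) ?
            msm_host->pdata->ice_clk_min :
            msm_host->pdata->ice_clk_max;

    if (msm_host->ice_clk_rate != clk_rate &&
        !clk_set_rate(msm_host->ice_clk, clk_rate))
        msm_host->ice_clk_rate = clk_rate;
}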

mmc: block: add req pointer to mmc request

This is needed by the ICE (Inline Crypto Engine) driver to get
the ICE configuration data from the request.
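
The hookup is a single assignment in the block driver's issue paths
(see the block.c hunks below):

/* Remember the originating block layer request so the ICE driver can
 * derive the crypto configuration (key index, DUN) from it.
 */
mqrq->brq.mrq.req = req;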

Change-Id: Ie69c64f4dc0c31290dec50d905e8b3d436c86d62
Signed-off-by: Veerabhadrarao Badiganti <vbadigan@codeaurora.org>
Signed-off-by: Sahitya Tummala <stummala@codeaurora.org>
Signed-off-by: Venkat Gopalakrishnan <venkatg@codeaurora.org>
Signed-off-by: Ram Prakash Gupta <rampraka@codeaurora.org>
Authored by Sahitya Tummala on 2015-05-21 08:28:19 +05:30, committed by Ram Prakash Gupta
parent 1b30d1daa8
commit 1a6164c473
13 changed files with 1172 additions and 2 deletions

drivers/mmc/core/block.c

@ -1595,6 +1595,7 @@ static int mmc_blk_cqe_issue_rw_rq(struct mmc_queue *mq, struct request *req)
struct mmc_queue_req *mqrq = req_to_mmc_queue_req(req);
mmc_blk_data_prep(mq, mqrq, 0, NULL, NULL);
mqrq->brq.mrq.req = req;
return mmc_blk_cqe_start_req(mq->card->host, &mqrq->brq.mrq);
}
@ -2199,6 +2200,7 @@ static int mmc_blk_mq_issue_rw_rq(struct mmc_queue *mq,
mmc_blk_rw_rq_prep(mqrq, mq->card, 0, mq);
mqrq->brq.mrq.done = mmc_blk_mq_req_done;
mqrq->brq.mrq.req = req;
mmc_pre_req(host, &mqrq->brq.mrq);

drivers/mmc/core/queue.c

@ -369,6 +369,8 @@ static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
min(host->max_blk_count, host->max_req_size / 512));
blk_queue_max_segments(mq->queue, host->max_segs);
blk_queue_max_segment_size(mq->queue, host->max_seg_size);
if (host->inlinecrypt_support)
queue_flag_set_unlocked(QUEUE_FLAG_INLINECRYPT, mq->queue);
INIT_WORK(&mq->recovery_work, mmc_mq_recovery_handler);
INIT_WORK(&mq->complete_work, mmc_blk_mq_complete_work);

drivers/mmc/host/Kconfig

@ -149,6 +149,17 @@ config MMC_SDHCI_OF_AT91
help
This selects the Atmel SDMMC driver
config MMC_SDHCI_MSM_ICE
bool "Qualcomm Technologies, Inc Inline Crypto Engine for SDHCI core"
depends on MMC_SDHCI_MSM && CRYPTO_DEV_QCOM_ICE
help
This selects the QTI specific additions to support Inline Crypto
Engine (ICE). ICE accelerates the crypto operations and maintains
the high SDHCI performance.
Select this if you have ICE supported for SDHCI on QTI chipset.
If unsure, say N.
config MMC_SDHCI_OF_ESDHC
tristate "SDHCI OF support for the Freescale eSDHC controller"
depends on MMC_SDHCI_PLTFM

drivers/mmc/host/Makefile

@ -87,6 +87,7 @@ obj-$(CONFIG_MMC_SDHCI_OF_DWCMSHC) += sdhci-of-dwcmshc.o
obj-$(CONFIG_MMC_SDHCI_BCM_KONA) += sdhci-bcm-kona.o
obj-$(CONFIG_MMC_SDHCI_IPROC) += sdhci-iproc.o
obj-$(CONFIG_MMC_SDHCI_MSM) += sdhci-msm.o
obj-$(CONFIG_MMC_SDHCI_MSM_ICE) += sdhci-msm-ice.o
obj-$(CONFIG_MMC_SDHCI_ST) += sdhci-st.o
obj-$(CONFIG_MMC_SDHCI_MICROCHIP_PIC32) += sdhci-pic32.o
obj-$(CONFIG_MMC_SDHCI_BRCMSTB) += sdhci-brcmstb.o

drivers/mmc/host/cqhci.c

@ -241,6 +241,7 @@ static void __cqhci_enable(struct cqhci_host *cq_host)
{
struct mmc_host *mmc = cq_host->mmc;
u32 cqcfg;
u32 cqcap = 0;
cqcfg = cqhci_readl(cq_host, CQHCI_CFG);
@ -258,6 +259,18 @@ static void __cqhci_enable(struct cqhci_host *cq_host)
if (cq_host->caps & CQHCI_TASK_DESC_SZ_128)
cqcfg |= CQHCI_TASK_DESC_SZ;
cqcap = cqhci_readl(cq_host, CQHCI_CAP);
if (cqcap & CQHCI_CAP_CS) {
/*
* In case host controller supports cryptographic operations
* then, it uses 128bit task descriptor. Upper 64 bits of task
* descriptor would be used to pass crypto specific information.
*/
cq_host->caps |= CQHCI_CAP_CRYPTO_SUPPORT |
CQHCI_TASK_DESC_SZ_128;
cqcfg |= CQHCI_ICE_ENABLE;
}
cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
cqcfg |= CQHCI_ENABLE;
@ -554,6 +567,30 @@ static inline int cqhci_tag(struct mmc_request *mrq)
return mrq->cmd ? DCMD_SLOT : mrq->tag;
}
static inline
void cqe_prep_crypto_desc(struct cqhci_host *cq_host, u64 *task_desc,
u64 ice_ctx)
{
u64 *ice_desc = NULL;
if (cq_host->caps & CQHCI_CAP_CRYPTO_SUPPORT) {
/*
* Get the address of ice context for the given task descriptor.
* ice context is present in the upper 64bits of task descriptor
* ice_context_base_address = task_desc + 8-bytes
*/
ice_desc = (__le64 __force *)((u8 *)task_desc +
CQHCI_TASK_DESC_TASK_PARAMS_SIZE);
memset(ice_desc, 0, CQHCI_TASK_DESC_ICE_PARAMS_SIZE);
/*
* Assign upper 64bits data of task descriptor with ice context
*/
if (ice_ctx)
*ice_desc = cpu_to_le64(ice_ctx);
}
}
static int cqhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
{
int err = 0;
@ -562,6 +599,7 @@ static int cqhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
int tag = cqhci_tag(mrq);
struct cqhci_host *cq_host = mmc->cqe_private;
unsigned long flags;
u64 ice_ctx = 0;
if (!cq_host->enabled) {
pr_err("%s: cqhci: not enabled\n", mmc_hostname(mmc));
@ -585,9 +623,19 @@ static int cqhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
}
if (mrq->data) {
if (cq_host->ops->crypto_cfg) {
err = cq_host->ops->crypto_cfg(mmc, mrq, tag, &ice_ctx);
if (err) {
pr_err("%s: failed to configure crypto: err %d tag %d\n",
mmc_hostname(mmc), err, tag);
goto out;
}
}
task_desc = (__le64 __force *)get_desc(cq_host, tag);
cqhci_prep_task_desc(mrq, &data, 1);
*task_desc = cpu_to_le64(data);
cqe_prep_crypto_desc(cq_host, task_desc, ice_ctx);
err = cqhci_prep_tran_desc(mrq, cq_host, tag);
if (err) {
pr_err("%s: cqhci: failed to setup tx desc: %d\n",
@ -619,7 +667,7 @@ out_unlock:
if (err)
cqhci_post_req(mmc, mrq);
out:
return err;
}
@ -720,6 +768,7 @@ static void cqhci_finish_mrq(struct mmc_host *mmc, unsigned int tag)
struct cqhci_slot *slot = &cq_host->slot[tag];
struct mmc_request *mrq = slot->mrq;
struct mmc_data *data;
int err = 0;
if (!mrq) {
WARN_ONCE(1, "%s: cqhci: spurious TCN for tag %d\n",
@ -739,12 +788,22 @@ static void cqhci_finish_mrq(struct mmc_host *mmc, unsigned int tag)
data = mrq->data;
if (data) {
if (cq_host->ops->crypto_cfg_end) {
err = cq_host->ops->crypto_cfg_end(mmc, mrq);
if (err) {
pr_err("%s: failed to end ice config: err %d tag %d\n",
mmc_hostname(mmc), err, tag);
}
}
if (data->error)
data->bytes_xfered = 0;
else
data->bytes_xfered = data->blksz * data->blocks;
}
if (!(cq_host->caps & CQHCI_CAP_CRYPTO_SUPPORT) &&
cq_host->ops->crypto_cfg_reset)
cq_host->ops->crypto_cfg_reset(mmc, tag);
mmc_cqe_request_done(mmc, mrq);
}

drivers/mmc/host/cqhci.h

@ -30,11 +30,13 @@
/* capabilities */
#define CQHCI_CAP 0x04
#define CQHCI_CAP_CS (1 << 28)
/* configuration */
#define CQHCI_CFG 0x08
#define CQHCI_DCMD 0x00001000
#define CQHCI_TASK_DESC_SZ 0x00000100
#define CQHCI_ENABLE 0x00000001
#define CQHCI_ICE_ENABLE 0x00000002
/* control */
#define CQHCI_CTL 0x0C
@ -145,6 +147,9 @@
#define CQHCI_DAT_ADDR_LO(x) (((x) & 0xFFFFFFFF) << 32)
#define CQHCI_DAT_ADDR_HI(x) (((x) & 0xFFFFFFFF) << 0)
#define CQHCI_TASK_DESC_TASK_PARAMS_SIZE 8
#define CQHCI_TASK_DESC_ICE_PARAMS_SIZE 8
struct cqhci_host_ops;
struct mmc_host;
struct cqhci_slot;
@ -167,6 +172,7 @@ struct cqhci_host {
u32 dcmd_slot;
u32 caps;
#define CQHCI_TASK_DESC_SZ_128 0x1
#define CQHCI_CAP_CRYPTO_SUPPORT 0x2
u32 quirks;
#define CQHCI_QUIRK_SHORT_TXFR_DESC_SZ 0x1
@ -210,6 +216,10 @@ struct cqhci_host_ops {
u32 (*read_l)(struct cqhci_host *host, int reg);
void (*enable)(struct mmc_host *mmc);
void (*disable)(struct mmc_host *mmc, bool recovery);
int (*crypto_cfg)(struct mmc_host *mmc, struct mmc_request *mrq,
u32 slot, u64 *ice_ctx);
int (*crypto_cfg_end)(struct mmc_host *mmc, struct mmc_request *mrq);
void (*crypto_cfg_reset)(struct mmc_host *mmc, unsigned int slot);
};
static inline void cqhci_writel(struct cqhci_host *host, u32 val, int reg)

drivers/mmc/host/sdhci-msm-ice.c

@ -0,0 +1,572 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2015, 2017-2019, The Linux Foundation. All rights reserved.
*/
#include "sdhci-msm-ice.h"
static void sdhci_msm_ice_error_cb(void *host_ctrl, u32 error)
{
struct sdhci_msm_host *msm_host = (struct sdhci_msm_host *)host_ctrl;
dev_err(&msm_host->pdev->dev, "%s: Error in ice operation 0x%x\n",
__func__, error);
if (msm_host->ice.state == SDHCI_MSM_ICE_STATE_ACTIVE)
msm_host->ice.state = SDHCI_MSM_ICE_STATE_DISABLED;
}
static struct platform_device *sdhci_msm_ice_get_pdevice(struct device *dev)
{
struct device_node *node;
struct platform_device *ice_pdev = NULL;
node = of_parse_phandle(dev->of_node, SDHC_MSM_CRYPTO_LABEL, 0);
if (!node) {
dev_dbg(dev, "%s: sdhc-msm-crypto property not specified\n",
__func__);
goto out;
}
ice_pdev = qcom_ice_get_pdevice(node);
out:
return ice_pdev;
}
static
struct qcom_ice_variant_ops *sdhci_msm_ice_get_vops(struct device *dev)
{
struct qcom_ice_variant_ops *ice_vops = NULL;
struct device_node *node;
node = of_parse_phandle(dev->of_node, SDHC_MSM_CRYPTO_LABEL, 0);
if (!node) {
dev_dbg(dev, "%s: sdhc-msm-crypto property not specified\n",
__func__);
goto out;
}
ice_vops = qcom_ice_get_variant_ops(node);
of_node_put(node);
out:
return ice_vops;
}
static
void sdhci_msm_enable_ice_hci(struct sdhci_host *host, bool enable)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
u32 config = 0;
u32 ice_cap = 0;
/*
* Enable the cryptographic support inside SDHC.
* This is a global config which needs to be enabled
* all the time.
* Only when it is enabled, the ICE_HCI capability
* will get reflected in CQCAP register.
*/
config = readl_relaxed(host->ioaddr + HC_VENDOR_SPECIFIC_FUNC4);
if (enable)
config &= ~DISABLE_CRYPTO;
else
config |= DISABLE_CRYPTO;
writel_relaxed(config, host->ioaddr + HC_VENDOR_SPECIFIC_FUNC4);
/*
* CQCAP register is in different register space from above
* ice global enable register. So a mb() is required to ensure
* above write gets completed before reading the CQCAP register.
*/
mb();
/*
* Check if ICE HCI capability support is present
* If present, enable it.
*/
ice_cap = readl_relaxed(msm_host->cryptoio + ICE_CQ_CAPABILITIES);
if (ice_cap & ICE_HCI_SUPPORT) {
config = readl_relaxed(msm_host->cryptoio + ICE_CQ_CONFIG);
if (enable)
config |= CRYPTO_GENERAL_ENABLE;
else
config &= ~CRYPTO_GENERAL_ENABLE;
writel_relaxed(config, msm_host->cryptoio + ICE_CQ_CONFIG);
}
}
int sdhci_msm_ice_get_dev(struct sdhci_host *host)
{
struct device *sdhc_dev;
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
if (!msm_host || !msm_host->pdev) {
pr_err("%s: invalid msm_host %p or msm_host->pdev\n",
__func__, msm_host);
return -EINVAL;
}
sdhc_dev = &msm_host->pdev->dev;
msm_host->ice.vops = sdhci_msm_ice_get_vops(sdhc_dev);
msm_host->ice.pdev = sdhci_msm_ice_get_pdevice(sdhc_dev);
if (msm_host->ice.pdev == ERR_PTR(-EPROBE_DEFER)) {
dev_err(sdhc_dev, "%s: ICE device not probed yet\n",
__func__);
msm_host->ice.pdev = NULL;
msm_host->ice.vops = NULL;
return -EPROBE_DEFER;
}
if (!msm_host->ice.pdev) {
dev_dbg(sdhc_dev, "%s: invalid platform device\n", __func__);
msm_host->ice.vops = NULL;
return -ENODEV;
}
if (!msm_host->ice.vops) {
dev_dbg(sdhc_dev, "%s: invalid ice vops\n", __func__);
msm_host->ice.pdev = NULL;
return -ENODEV;
}
msm_host->ice.state = SDHCI_MSM_ICE_STATE_DISABLED;
return 0;
}
static
int sdhci_msm_ice_pltfm_init(struct sdhci_msm_host *msm_host)
{
struct resource *ice_memres = NULL;
struct platform_device *pdev = msm_host->pdev;
int err = 0;
if (!msm_host->ice_hci_support)
goto out;
/*
* ICE HCI registers are present in cmdq register space.
* So map the cmdq mem for accessing ICE HCI registers.
*/
ice_memres = platform_get_resource_byname(pdev,
IORESOURCE_MEM, "cqhci_mem");
if (!ice_memres) {
dev_err(&pdev->dev, "Failed to get iomem resource for ice\n");
err = -EINVAL;
goto out;
}
msm_host->cryptoio = devm_ioremap(&pdev->dev,
ice_memres->start,
resource_size(ice_memres));
if (!msm_host->cryptoio) {
dev_err(&pdev->dev, "Failed to remap registers\n");
err = -ENOMEM;
}
out:
return err;
}
int sdhci_msm_ice_init(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int err = 0;
if (msm_host->ice.vops->init) {
err = sdhci_msm_ice_pltfm_init(msm_host);
if (err)
goto out;
if (msm_host->ice_hci_support)
sdhci_msm_enable_ice_hci(host, true);
err = msm_host->ice.vops->init(msm_host->ice.pdev,
msm_host,
sdhci_msm_ice_error_cb);
if (err) {
pr_err("%s: ice init err %d\n",
mmc_hostname(host->mmc), err);
sdhci_msm_ice_print_regs(host);
if (msm_host->ice_hci_support)
sdhci_msm_enable_ice_hci(host, false);
goto out;
}
msm_host->ice.state = SDHCI_MSM_ICE_STATE_ACTIVE;
}
out:
return err;
}
void sdhci_msm_ice_cfg_reset(struct sdhci_host *host, u32 slot)
{
writel_relaxed(SDHCI_MSM_ICE_ENABLE_BYPASS,
host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL_INFO_3_n + 16 * slot);
}
static
int sdhci_msm_ice_get_cfg(struct sdhci_msm_host *msm_host, struct request *req,
unsigned int *bypass, short *key_index)
{
int err = 0;
struct ice_data_setting ice_set;
memset(&ice_set, 0, sizeof(struct ice_data_setting));
if (msm_host->ice.vops->config_start) {
err = msm_host->ice.vops->config_start(
msm_host->ice.pdev,
req, &ice_set, false);
if (err) {
pr_err("%s: ice config failed %d\n",
mmc_hostname(msm_host->mmc), err);
return err;
}
}
/* if writing data command */
if (rq_data_dir(req) == WRITE)
*bypass = ice_set.encr_bypass ?
SDHCI_MSM_ICE_ENABLE_BYPASS :
SDHCI_MSM_ICE_DISABLE_BYPASS;
/* if reading data command */
else if (rq_data_dir(req) == READ)
*bypass = ice_set.decr_bypass ?
SDHCI_MSM_ICE_ENABLE_BYPASS :
SDHCI_MSM_ICE_DISABLE_BYPASS;
*key_index = ice_set.crypto_data.key_index;
return err;
}
static
void sdhci_msm_ice_update_cfg(struct sdhci_host *host, u64 lba, u32 slot,
unsigned int bypass, short key_index, u32 cdu_sz)
{
unsigned int ctrl_info_val = 0;
/* Configure ICE index */
ctrl_info_val =
(key_index &
MASK_SDHCI_MSM_ICE_CTRL_INFO_KEY_INDEX)
<< OFFSET_SDHCI_MSM_ICE_CTRL_INFO_KEY_INDEX;
/* Configure data unit size of transfer request */
ctrl_info_val |=
(cdu_sz &
MASK_SDHCI_MSM_ICE_CTRL_INFO_CDU)
<< OFFSET_SDHCI_MSM_ICE_CTRL_INFO_CDU;
/* Configure ICE bypass mode */
ctrl_info_val |=
(bypass & MASK_SDHCI_MSM_ICE_CTRL_INFO_BYPASS)
<< OFFSET_SDHCI_MSM_ICE_CTRL_INFO_BYPASS;
writel_relaxed((lba & 0xFFFFFFFF),
host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL_INFO_1_n + 16 * slot);
writel_relaxed(((lba >> 32) & 0xFFFFFFFF),
host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL_INFO_2_n + 16 * slot);
writel_relaxed(ctrl_info_val,
host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL_INFO_3_n + 16 * slot);
/* Ensure ICE registers are configured before issuing SDHCI request */
mb();
}
static inline
void sdhci_msm_ice_hci_update_cqe_cfg(u64 dun, unsigned int bypass,
short key_index, u64 *ice_ctx)
{
/*
* The naming convention got changed between ICE2.0 and ICE3.0
* registers fields. Below is the equivalent names for
* ICE3.0 Vs ICE2.0:
* Data Unit Number(DUN) == Logical Base address(LBA)
* Crypto Configuration index (CCI) == Key Index
* Crypto Enable (CE) == !BYPASS
*/
if (ice_ctx)
*ice_ctx = DATA_UNIT_NUM(dun) |
CRYPTO_CONFIG_INDEX(key_index) |
CRYPTO_ENABLE(!bypass);
}
static
void sdhci_msm_ice_hci_update_noncq_cfg(struct sdhci_host *host,
u64 dun, unsigned int bypass, short key_index)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
unsigned int crypto_params = 0;
/*
* The naming convention got changed between ICE2.0 and ICE3.0
* registers fields. Below is the equivalent names for
* ICE3.0 Vs ICE2.0:
* Data Unit Number(DUN) == Logical Base address(LBA)
* Crypto Configuration index (CCI) == Key Index
* Crypto Enable (CE) == !BYPASS
*/
/* Configure ICE bypass mode */
crypto_params |=
((!bypass) & MASK_SDHCI_MSM_ICE_HCI_PARAM_CE)
<< OFFSET_SDHCI_MSM_ICE_HCI_PARAM_CE;
/* Configure Crypto Configure Index (CCI) */
crypto_params |= (key_index &
MASK_SDHCI_MSM_ICE_HCI_PARAM_CCI)
<< OFFSET_SDHCI_MSM_ICE_HCI_PARAM_CCI;
writel_relaxed((crypto_params & 0xFFFFFFFF),
msm_host->cryptoio + ICE_NONCQ_CRYPTO_PARAMS);
/* Update DUN */
writel_relaxed((dun & 0xFFFFFFFF),
msm_host->cryptoio + ICE_NONCQ_CRYPTO_DUN);
/* Ensure ICE registers are configured before issuing SDHCI request */
mb();
}
int sdhci_msm_ice_cfg(struct sdhci_host *host, struct mmc_request *mrq,
u32 slot)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int err = 0;
short key_index = 0;
u64 dun = 0;
unsigned int bypass = SDHCI_MSM_ICE_ENABLE_BYPASS;
u32 cdu_sz = SDHCI_MSM_ICE_TR_DATA_UNIT_512_B;
struct request *req;
if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
pr_err("%s: ice is in invalid state %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
WARN_ON(!mrq);
if (!mrq)
return -EINVAL;
req = mrq->req;
if (req && req->bio) {
if (bio_dun(req->bio)) {
dun = bio_dun(req->bio);
cdu_sz = SDHCI_MSM_ICE_TR_DATA_UNIT_4_KB;
} else {
dun = req->__sector;
}
err = sdhci_msm_ice_get_cfg(msm_host, req, &bypass, &key_index);
if (err)
return err;
pr_debug("%s: %s: slot %d bypass %d key_index %d\n",
mmc_hostname(host->mmc),
(rq_data_dir(req) == WRITE) ? "WRITE" : "READ",
slot, bypass, key_index);
}
if (msm_host->ice_hci_support) {
/* For ICE HCI / ICE3.0 */
sdhci_msm_ice_hci_update_noncq_cfg(host, dun, bypass,
key_index);
} else {
/* For ICE versions earlier to ICE3.0 */
sdhci_msm_ice_update_cfg(host, dun, slot, bypass, key_index,
cdu_sz);
}
return 0;
}
int sdhci_msm_ice_cqe_cfg(struct sdhci_host *host,
struct mmc_request *mrq, u32 slot, u64 *ice_ctx)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int err = 0;
short key_index = 0;
u64 dun = 0;
unsigned int bypass = SDHCI_MSM_ICE_ENABLE_BYPASS;
struct request *req;
u32 cdu_sz = SDHCI_MSM_ICE_TR_DATA_UNIT_512_B;
if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
pr_err("%s: ice is in invalid state %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
WARN_ON(!mrq);
if (!mrq)
return -EINVAL;
req = mrq->req;
if (req && req->bio) {
if (bio_dun(req->bio)) {
dun = bio_dun(req->bio);
cdu_sz = SDHCI_MSM_ICE_TR_DATA_UNIT_4_KB;
} else {
dun = req->__sector;
}
err = sdhci_msm_ice_get_cfg(msm_host, req, &bypass, &key_index);
if (err)
return err;
pr_debug("%s: %s: slot %d bypass %d key_index %d\n",
mmc_hostname(host->mmc),
(rq_data_dir(req) == WRITE) ? "WRITE" : "READ",
slot, bypass, key_index);
}
if (msm_host->ice_hci_support) {
/* For ICE HCI / ICE3.0 */
sdhci_msm_ice_hci_update_cqe_cfg(dun, bypass, key_index,
ice_ctx);
} else {
/* For ICE versions earlier to ICE3.0 */
sdhci_msm_ice_update_cfg(host, dun, slot, bypass, key_index,
cdu_sz);
}
return 0;
}
int sdhci_msm_ice_cfg_end(struct sdhci_host *host, struct mmc_request *mrq)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int err = 0;
struct request *req;
if (!host->is_crypto_en)
return 0;
if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
pr_err("%s: ice is in invalid state %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
req = mrq->req;
if (req) {
if (msm_host->ice.vops->config_end) {
err = msm_host->ice.vops->config_end(req);
if (err) {
pr_err("%s: ice config end failed %d\n",
mmc_hostname(host->mmc), err);
return err;
}
}
}
return 0;
}
int sdhci_msm_ice_reset(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int err = 0;
if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
pr_err("%s: ice is in invalid state before reset %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
if (msm_host->ice.vops->reset) {
err = msm_host->ice.vops->reset(msm_host->ice.pdev);
if (err) {
pr_err("%s: ice reset failed %d\n",
mmc_hostname(host->mmc), err);
sdhci_msm_ice_print_regs(host);
return err;
}
}
/* If ICE HCI support is present then re-enable it */
if (msm_host->ice_hci_support)
sdhci_msm_enable_ice_hci(host, true);
if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
pr_err("%s: ice is in invalid state after reset %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
return 0;
}
int sdhci_msm_ice_resume(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int err = 0;
if (msm_host->ice.state !=
SDHCI_MSM_ICE_STATE_SUSPENDED) {
pr_err("%s: ice is in invalid state before resume %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
if (msm_host->ice.vops->resume) {
err = msm_host->ice.vops->resume(msm_host->ice.pdev);
if (err) {
pr_err("%s: ice resume failed %d\n",
mmc_hostname(host->mmc), err);
return err;
}
}
msm_host->ice.state = SDHCI_MSM_ICE_STATE_ACTIVE;
return 0;
}
int sdhci_msm_ice_suspend(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int err = 0;
if (msm_host->ice.state !=
SDHCI_MSM_ICE_STATE_ACTIVE) {
pr_err("%s: ice is in invalid state before resume %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
if (msm_host->ice.vops->suspend) {
err = msm_host->ice.vops->suspend(msm_host->ice.pdev);
if (err) {
pr_err("%s: ice suspend failed %d\n",
mmc_hostname(host->mmc), err);
return -EINVAL;
}
}
msm_host->ice.state = SDHCI_MSM_ICE_STATE_SUSPENDED;
return 0;
}
int sdhci_msm_ice_get_status(struct sdhci_host *host, int *ice_status)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int stat = -EINVAL;
if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
pr_err("%s: ice is in invalid state %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
if (msm_host->ice.vops->status) {
*ice_status = 0;
stat = msm_host->ice.vops->status(msm_host->ice.pdev);
if (stat < 0) {
pr_err("%s: ice get sts failed %d\n",
mmc_hostname(host->mmc), stat);
return -EINVAL;
}
*ice_status = stat;
}
return 0;
}
void sdhci_msm_ice_print_regs(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
if (msm_host->ice.vops->debug)
msm_host->ice.vops->debug(msm_host->ice.pdev);
}

drivers/mmc/host/sdhci-msm-ice.h

@ -0,0 +1,164 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (c) 2015, 2017, 2019, The Linux Foundation. All rights reserved.
*/
#ifndef __SDHCI_MSM_ICE_H__
#define __SDHCI_MSM_ICE_H__
#include <linux/io.h>
#include <linux/of.h>
#include <linux/blkdev.h>
#include <crypto/ice.h>
#include "sdhci-msm.h"
#define SDHC_MSM_CRYPTO_LABEL "sdhc-msm-crypto"
/* Timeout waiting for ICE initialization, that requires TZ access */
#define SDHCI_MSM_ICE_COMPLETION_TIMEOUT_MS 500
/*
* SDHCI host controller ICE registers. There are n [0..31]
* of each of these registers
*/
#define NUM_SDHCI_MSM_ICE_CTRL_INFO_n_REGS 32
#define CORE_VENDOR_SPEC_ICE_CTRL 0x300
#define CORE_VENDOR_SPEC_ICE_CTRL_INFO_1_n 0x304
#define CORE_VENDOR_SPEC_ICE_CTRL_INFO_2_n 0x308
#define CORE_VENDOR_SPEC_ICE_CTRL_INFO_3_n 0x30C
/* ICE3.0 registers which got added in the cmdq reg space */
#define ICE_CQ_CAPABILITIES 0x04
#define ICE_HCI_SUPPORT (1 << 28)
#define ICE_CQ_CONFIG 0x08
#define CRYPTO_GENERAL_ENABLE (1 << 1)
#define ICE_NONCQ_CRYPTO_PARAMS 0x70
#define ICE_NONCQ_CRYPTO_DUN 0x74
/* ICE3.0 registers which got added in the hc reg space */
#define HC_VENDOR_SPECIFIC_FUNC4 0x260
#define DISABLE_CRYPTO (1 << 15)
#define HC_VENDOR_SPECIFIC_ICE_CTRL 0x800
#define ICE_SW_RST_EN (1 << 0)
/* SDHCI MSM ICE CTRL Info register offset */
enum {
OFFSET_SDHCI_MSM_ICE_CTRL_INFO_BYPASS = 0,
OFFSET_SDHCI_MSM_ICE_CTRL_INFO_KEY_INDEX = 1,
OFFSET_SDHCI_MSM_ICE_CTRL_INFO_CDU = 6,
OFFSET_SDHCI_MSM_ICE_HCI_PARAM_CCI = 0,
OFFSET_SDHCI_MSM_ICE_HCI_PARAM_CE = 8,
};
/* SDHCI MSM ICE CTRL Info register masks */
enum {
MASK_SDHCI_MSM_ICE_CTRL_INFO_BYPASS = 0x1,
MASK_SDHCI_MSM_ICE_CTRL_INFO_KEY_INDEX = 0x1F,
MASK_SDHCI_MSM_ICE_CTRL_INFO_CDU = 0x7,
MASK_SDHCI_MSM_ICE_HCI_PARAM_CE = 0x1,
MASK_SDHCI_MSM_ICE_HCI_PARAM_CCI = 0xff
};
/* SDHCI MSM ICE encryption/decryption bypass state */
enum {
SDHCI_MSM_ICE_DISABLE_BYPASS = 0,
SDHCI_MSM_ICE_ENABLE_BYPASS = 1,
};
/* SDHCI MSM ICE Crypto Data Unit of target DUN of Transfer Request */
enum {
SDHCI_MSM_ICE_TR_DATA_UNIT_512_B = 0,
SDHCI_MSM_ICE_TR_DATA_UNIT_1_KB = 1,
SDHCI_MSM_ICE_TR_DATA_UNIT_2_KB = 2,
SDHCI_MSM_ICE_TR_DATA_UNIT_4_KB = 3,
SDHCI_MSM_ICE_TR_DATA_UNIT_8_KB = 4,
SDHCI_MSM_ICE_TR_DATA_UNIT_16_KB = 5,
SDHCI_MSM_ICE_TR_DATA_UNIT_32_KB = 6,
SDHCI_MSM_ICE_TR_DATA_UNIT_64_KB = 7,
};
/* SDHCI MSM ICE internal state */
enum {
SDHCI_MSM_ICE_STATE_DISABLED = 0,
SDHCI_MSM_ICE_STATE_ACTIVE = 1,
SDHCI_MSM_ICE_STATE_SUSPENDED = 2,
};
/* crypto context fields in cmdq data command task descriptor */
#define DATA_UNIT_NUM(x) (((u64)(x) & 0xFFFFFFFF) << 0)
#define CRYPTO_CONFIG_INDEX(x) (((u64)(x) & 0xFF) << 32)
#define CRYPTO_ENABLE(x) (((u64)(x) & 0x1) << 47)
#ifdef CONFIG_MMC_SDHCI_MSM_ICE
int sdhci_msm_ice_get_dev(struct sdhci_host *host);
int sdhci_msm_ice_init(struct sdhci_host *host);
void sdhci_msm_ice_cfg_reset(struct sdhci_host *host, u32 slot);
int sdhci_msm_ice_cfg(struct sdhci_host *host, struct mmc_request *mrq,
u32 slot);
int sdhci_msm_ice_cqe_cfg(struct sdhci_host *host,
struct mmc_request *mrq, u32 slot, u64 *ice_ctx);
int sdhci_msm_ice_cfg_end(struct sdhci_host *host, struct mmc_request *mrq);
int sdhci_msm_ice_reset(struct sdhci_host *host);
int sdhci_msm_ice_resume(struct sdhci_host *host);
int sdhci_msm_ice_suspend(struct sdhci_host *host);
int sdhci_msm_ice_get_status(struct sdhci_host *host, int *ice_status);
void sdhci_msm_ice_print_regs(struct sdhci_host *host);
#else
inline int sdhci_msm_ice_get_dev(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
if (msm_host) {
msm_host->ice.pdev = NULL;
msm_host->ice.vops = NULL;
}
return -ENODEV;
}
inline int sdhci_msm_ice_init(struct sdhci_host *host)
{
return 0;
}
inline void sdhci_msm_ice_cfg_reset(struct sdhci_host *host, u32 slot)
{
}
inline int sdhci_msm_ice_cfg(struct sdhci_host *host,
struct mmc_request *mrq, u32 slot)
{
return 0;
}
inline int sdhci_msm_ice_cqe_cfg(struct sdhci_host *host,
struct mmc_request *mrq, u32 slot, u64 *ice_ctx)
{
return 0;
}
inline int sdhci_msm_ice_cfg_end(struct sdhci_host *host,
struct mmc_request *mrq)
{
return 0;
}
inline int sdhci_msm_ice_reset(struct sdhci_host *host)
{
return 0;
}
inline int sdhci_msm_ice_resume(struct sdhci_host *host)
{
return 0;
}
inline int sdhci_msm_ice_suspend(struct sdhci_host *host)
{
return 0;
}
inline int sdhci_msm_ice_get_status(struct sdhci_host *host,
int *ice_status)
{
return 0;
}
inline void sdhci_msm_ice_print_regs(struct sdhci_host *host)
{
}
#endif /* CONFIG_MMC_SDHCI_MSM_ICE */
#endif /* __SDHCI_MSM_ICE_H__ */

drivers/mmc/host/sdhci-msm.c

@ -33,6 +33,7 @@
#include <trace/events/mmc.h>
#include "sdhci-msm.h"
#include "sdhci-msm-ice.h"
#include "sdhci-pltfm.h"
#include "cqhci.h"
@ -1983,6 +1984,8 @@ struct sdhci_msm_pltfm_data *sdhci_msm_populate_pdata(struct device *dev,
int len, i;
int clk_table_len;
u32 *clk_table = NULL;
int ice_clk_table_len;
u32 *ice_clk_table = NULL;
enum of_gpio_flags flags = OF_GPIO_ACTIVE_LOW;
int bus_clk_table_len;
u32 *bus_clk_table = NULL;
@ -2025,6 +2028,28 @@ struct sdhci_msm_pltfm_data *sdhci_msm_populate_pdata(struct device *dev,
}
}
if (msm_host->ice.pdev) {
if (sdhci_msm_dt_get_array(dev, "qcom,ice-clk-rates",
&ice_clk_table, &ice_clk_table_len, 0)) {
dev_err(dev, "failed parsing supported ice clock rates\n");
goto out;
}
if (!ice_clk_table || !ice_clk_table_len) {
dev_err(dev, "Invalid clock table\n");
goto out;
}
if (ice_clk_table_len != 2) {
dev_err(dev, "Need max and min frequencies in the table\n");
goto out;
}
pdata->sup_ice_clk_table = ice_clk_table;
pdata->sup_ice_clk_cnt = ice_clk_table_len;
pdata->ice_clk_max = pdata->sup_ice_clk_table[0];
pdata->ice_clk_min = pdata->sup_ice_clk_table[1];
dev_dbg(dev, "supported ICE clock rates (Hz): max: %u min: %u\n",
pdata->ice_clk_max, pdata->ice_clk_min);
}
pdata->vreg_data = devm_kzalloc(dev, sizeof(struct
sdhci_msm_slot_reg_data),
GFP_KERNEL);
@ -2251,9 +2276,69 @@ void sdhci_msm_cqe_disable(struct mmc_host *mmc, bool recovery)
sdhci_cqe_disable(mmc, recovery);
}
int sdhci_msm_cqe_crypto_cfg(struct mmc_host *mmc,
struct mmc_request *mrq, u32 slot, u64 *ice_ctx)
{
int err = 0;
struct sdhci_host *host = mmc_priv(mmc);
if (!host->is_crypto_en)
return 0;
if (host->crypto_reset_reqd && host->ops->crypto_engine_reset) {
err = host->ops->crypto_engine_reset(host);
if (err) {
pr_err("%s: crypto reset failed\n",
mmc_hostname(host->mmc));
goto out;
}
host->crypto_reset_reqd = false;
}
err = sdhci_msm_ice_cqe_cfg(host, mrq, slot, ice_ctx);
if (err) {
pr_err("%s: failed to configure crypto\n",
mmc_hostname(host->mmc));
goto out;
}
out:
return err;
}
void sdhci_msm_cqe_crypto_cfg_reset(struct mmc_host *mmc, unsigned int slot)
{
struct sdhci_host *host = mmc_priv(mmc);
if (!host->is_crypto_en)
return;
return sdhci_msm_ice_cfg_reset(host, slot);
}
int sdhci_msm_cqe_crypto_cfg_end(struct mmc_host *mmc,
struct mmc_request *mrq)
{
int err = 0;
struct sdhci_host *host = mmc_priv(mmc);
if (!host->is_crypto_en)
return 0;
err = sdhci_msm_ice_cfg_end(host, mrq);
if (err) {
pr_err("%s: failed to configure crypto\n",
mmc_hostname(host->mmc));
return err;
}
return 0;
}
static const struct cqhci_host_ops sdhci_msm_cqhci_ops = {
.enable = sdhci_msm_cqe_enable,
.disable = sdhci_msm_cqe_disable,
.crypto_cfg = sdhci_msm_cqe_crypto_cfg,
.crypto_cfg_reset = sdhci_msm_cqe_crypto_cfg_reset,
.crypto_cfg_end = sdhci_msm_cqe_crypto_cfg_end,
};
#ifdef CONFIG_MMC_CQHCI
@ -3330,6 +3415,14 @@ static int sdhci_msm_enable_controller_clock(struct sdhci_host *host)
goto disable_bus_aggr_clk;
}
if (!IS_ERR(msm_host->ice_clk)) {
rc = clk_prepare_enable(msm_host->ice_clk);
if (rc) {
pr_err("%s: %s: failed to enable the ice-clk with error %d\n",
mmc_hostname(host->mmc), __func__, rc);
goto disable_host_clk;
}
}
atomic_set(&msm_host->controller_clock, 1);
pr_debug("%s: %s: enabled controller clock\n",
mmc_hostname(host->mmc), __func__);
@ -3339,6 +3432,9 @@ static int sdhci_msm_enable_controller_clock(struct sdhci_host *host)
disable_bus_aggr_clk:
if (!IS_ERR(msm_host->bus_aggr_clk))
clk_disable_unprepare(msm_host->bus_aggr_clk);
disable_host_clk:
if (!IS_ERR(msm_host->clk))
clk_disable_unprepare(msm_host->clk);
disable_pclk:
if (!IS_ERR(msm_host->pclk))
clk_disable_unprepare(msm_host->pclk);
@ -3448,6 +3544,8 @@ static int sdhci_msm_prepare_clocks(struct sdhci_host *host, bool enable)
clk_disable_unprepare(msm_host->sleep_clk);
if (!IS_ERR_OR_NULL(msm_host->ff_clk))
clk_disable_unprepare(msm_host->ff_clk);
if (!IS_ERR(msm_host->ice_clk))
clk_disable_unprepare(msm_host->ice_clk);
if (!IS_ERR_OR_NULL(msm_host->bus_clk))
clk_disable_unprepare(msm_host->bus_clk);
sdhci_msm_disable_controller_clock(host);
@ -3465,6 +3563,8 @@ disable_controller_clk:
clk_disable_unprepare(msm_host->clk);
if (!IS_ERR_OR_NULL(msm_host->bus_aggr_clk))
clk_disable_unprepare(msm_host->bus_aggr_clk);
if (!IS_ERR(msm_host->ice_clk))
clk_disable_unprepare(msm_host->ice_clk);
if (!IS_ERR_OR_NULL(msm_host->pclk))
clk_disable_unprepare(msm_host->pclk);
atomic_set(&msm_host->controller_clock, 0);
@ -3786,6 +3886,7 @@ void sdhci_msm_dump_vendor_regs(struct sdhci_host *host)
int i, index = 0;
u32 test_bus_val = 0;
u32 debug_reg[MAX_TEST_BUS] = {0};
u32 sts = 0;
sdhci_msm_cache_debug_data(host);
pr_info("----------- VENDOR REGISTER DUMP -----------\n");
@ -3851,10 +3952,29 @@ void sdhci_msm_dump_vendor_regs(struct sdhci_host *host)
pr_info(" Test bus[%d to %d]: 0x%08x 0x%08x 0x%08x 0x%08x\n",
i, i + 3, debug_reg[i], debug_reg[i+1],
debug_reg[i+2], debug_reg[i+3]);
if (host->is_crypto_en) {
sdhci_msm_ice_get_status(host, &sts);
pr_info("%s: ICE status %x\n", mmc_hostname(host->mmc), sts);
sdhci_msm_ice_print_regs(host);
}
}
static void sdhci_msm_reset(struct sdhci_host *host, u8 mask)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
/* Set ICE core to be reset in sync with SDHC core */
if (msm_host->ice.pdev) {
if (msm_host->ice_hci_support)
writel_relaxed(1, host->ioaddr +
HC_VENDOR_SPECIFIC_ICE_CTRL);
else
writel_relaxed(1,
host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL);
}
sdhci_reset(host, mask);
}
@ -4469,10 +4589,34 @@ static unsigned int sdhci_msm_get_current_limit(struct sdhci_host *host)
static int sdhci_msm_notify_load(struct sdhci_host *host, enum mmc_load state)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int ret = 0;
u32 clk_rate = 0;
if (!IS_ERR(msm_host->ice_clk)) {
clk_rate = (state == MMC_LOAD_LOW) ?
msm_host->pdata->ice_clk_min :
msm_host->pdata->ice_clk_max;
if (msm_host->ice_clk_rate == clk_rate)
return 0;
pr_debug("%s: changing ICE clk rate to %u\n",
mmc_hostname(host->mmc), clk_rate);
ret = clk_set_rate(msm_host->ice_clk, clk_rate);
if (ret) {
pr_err("%s: ICE_CLK rate set failed (%d) for %u\n",
mmc_hostname(host->mmc), ret, clk_rate);
return ret;
}
msm_host->ice_clk_rate = clk_rate;
}
return 0;
}
static struct sdhci_ops sdhci_msm_ops = {
.crypto_engine_cfg = sdhci_msm_ice_cfg,
.crypto_engine_cfg_end = sdhci_msm_ice_cfg_end,
.crypto_engine_reset = sdhci_msm_ice_reset,
.set_uhs_signaling = sdhci_msm_set_uhs_signaling,
.check_power_status = sdhci_msm_check_power_status,
.platform_execute_tuning = sdhci_msm_execute_tuning,
@ -4598,8 +4742,10 @@ static void sdhci_set_default_hw_caps(struct sdhci_msm_host *msm_host,
/* keep track of the value in SDHCI_CAPABILITIES */
msm_host->caps_0 = caps;
if ((major == 1) && (minor >= 0x6b))
if ((major == 1) && (minor >= 0x6b)) {
host->cdr_support = true;
msm_host->ice_hci_support = true;
}
/* 7FF projects with 7nm DLL */
if ((major == 1) && ((minor == 0x6e) || (minor == 0x71) ||
@ -4629,6 +4775,89 @@ static bool sdhci_msm_is_bootdevice(struct device *dev)
return true;
}
static int sdhci_msm_setup_ice_clk(struct sdhci_msm_host *msm_host,
struct platform_device *pdev)
{
int ret = 0;
if (msm_host->ice.pdev) {
/* Setup SDC ICE clock */
msm_host->ice_clk = devm_clk_get(&pdev->dev, "ice_core_clk");
if (!IS_ERR(msm_host->ice_clk)) {
/* ICE core has only one clock frequency for now */
ret = clk_set_rate(msm_host->ice_clk,
msm_host->pdata->ice_clk_max);
if (ret) {
dev_err(&pdev->dev, "ICE_CLK rate set failed (%d) for %u\n",
ret,
msm_host->pdata->ice_clk_max);
return ret;
}
ret = clk_prepare_enable(msm_host->ice_clk);
if (ret)
return ret;
msm_host->ice_clk_rate =
msm_host->pdata->ice_clk_max;
}
}
return ret;
}
static int sdhci_msm_initialize_ice(struct sdhci_msm_host *msm_host,
struct platform_device *pdev,
struct sdhci_host *host)
{
int ret = 0;
if (msm_host->ice.pdev) {
ret = sdhci_msm_ice_init(host);
if (ret) {
dev_err(&pdev->dev, "%s: SDHCi ICE init failed (%d)\n",
mmc_hostname(host->mmc), ret);
return -EINVAL;
}
host->is_crypto_en = true;
msm_host->mmc->inlinecrypt_support = true;
/* Packed commands cannot be encrypted/decrypted using ICE */
msm_host->mmc->caps2 &= ~(MMC_CAP2_PACKED_WR |
MMC_CAP2_PACKED_WR_CONTROL);
}
return 0;
}
static int sdhci_msm_get_ice_device_vops(struct sdhci_host *host,
struct platform_device *pdev)
{
int ret = 0;
ret = sdhci_msm_ice_get_dev(host);
if (ret == -EPROBE_DEFER) {
/*
* SDHCI driver might be probed before ICE driver does.
* In that case we would like to return EPROBE_DEFER code
* in order to delay its probing.
*/
dev_err(&pdev->dev, "%s: required ICE device not probed yet err = %d\n",
__func__, ret);
} else if (ret == -ENODEV) {
/*
* ICE device is not enabled in DTS file. No need for further
* initialization of ICE driver.
*/
dev_warn(&pdev->dev, "%s: ICE device is not enabled\n",
__func__);
ret = 0;
} else if (ret) {
dev_err(&pdev->dev, "%s: sdhci_msm_ice_get_dev failed %d\n",
__func__, ret);
}
return ret;
}
static int sdhci_msm_probe(struct platform_device *pdev)
{
const struct sdhci_msm_offset *msm_host_offset;
@ -4673,6 +4902,11 @@ static int sdhci_msm_probe(struct platform_device *pdev)
msm_host->mmc = host->mmc;
msm_host->pdev = pdev;
/* get the ice device vops if present */
ret = sdhci_msm_get_ice_device_vops(host, pdev);
if (ret)
goto out_host_free;
/* Extract platform data */
if (pdev->dev.of_node) {
ret = of_alias_get_id(pdev->dev.of_node, "sdhc");
@ -4747,6 +4981,10 @@ static int sdhci_msm_probe(struct platform_device *pdev)
}
}
ret = sdhci_msm_setup_ice_clk(msm_host, pdev);
if (ret)
goto pclk_disable;
/* Setup SDC MMC clock */
msm_host->clk = devm_clk_get(&pdev->dev, "core_clk");
if (IS_ERR(msm_host->clk)) {
@ -4982,6 +5220,12 @@ static int sdhci_msm_probe(struct platform_device *pdev)
if (msm_host->pdata->nonhotplug)
msm_host->mmc->caps2 |= MMC_CAP2_NONHOTPLUG;
/* Initialize ICE if present */
ret = sdhci_msm_initialize_ice(msm_host, pdev, host);
if (ret == -EINVAL)
goto vreg_deinit;
init_completion(&msm_host->pwr_irq_completion);
if (gpio_is_valid(msm_host->pdata->status_gpio)) {
@ -5258,6 +5502,7 @@ static int sdhci_msm_runtime_suspend(struct device *dev)
struct sdhci_host *host = dev_get_drvdata(dev);
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int ret;
if (host->mmc->card && mmc_card_sdio(host->mmc->card))
goto defer_disable_host_irq;
@ -5276,6 +5521,14 @@ defer_disable_host_irq:
if (msm_host->msm_bus_vote.client_handle)
sdhci_msm_bus_cancel_work_and_set_vote(host, 0);
}
if (host->is_crypto_en) {
ret = sdhci_msm_ice_suspend(host);
if (ret < 0)
pr_err("%s: failed to suspend crypto engine %d\n",
mmc_hostname(host->mmc), ret);
}
return 0;
}
@ -5284,6 +5537,21 @@ static int sdhci_msm_runtime_resume(struct device *dev)
struct sdhci_host *host = dev_get_drvdata(dev);
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int ret;
if (host->is_crypto_en) {
ret = sdhci_msm_enable_controller_clock(host);
if (ret) {
pr_err("%s: Failed to enable reqd clocks\n",
mmc_hostname(host->mmc));
goto skip_ice_resume;
}
ret = sdhci_msm_ice_resume(host);
if (ret)
pr_err("%s: failed to resume crypto engine %d\n",
mmc_hostname(host->mmc), ret);
}
skip_ice_resume:
if (host->mmc->card && mmc_card_sdio(host->mmc->card))
goto defer_enable_host_irq;

drivers/mmc/host/sdhci-msm.h

@ -146,6 +146,10 @@ struct sdhci_msm_pltfm_data {
struct sdhci_msm_pm_qos_data pm_qos_data;
u32 *bus_clk_table;
unsigned char bus_clk_cnt;
u32 *sup_ice_clk_table;
unsigned char sup_ice_clk_cnt;
u32 ice_clk_max;
u32 ice_clk_min;
};
struct sdhci_msm_bus_vote {
@ -199,9 +203,17 @@ struct sdhci_msm_debug_data {
struct sdhci_host copy_host;
};
struct sdhci_msm_ice_data {
struct qcom_ice_variant_ops *vops;
struct platform_device *pdev;
int state;
};
struct sdhci_msm_host {
struct platform_device *pdev;
void __iomem *core_mem; /* MSM SDCC mapped address */
void __iomem *cryptoio; /* ICE HCI mapped address */
bool ice_hci_support;
int pwr_irq; /* power irq */
struct clk *clk; /* main SD/MMC bus clock */
struct clk *pclk; /* SDHC peripheral bus clock */
@ -209,6 +221,7 @@ struct sdhci_msm_host {
struct clk *bus_clk; /* SDHC bus voter clock */
struct clk *ff_clk; /* CDC calibration fixed feedback clock */
struct clk *sleep_clk; /* CDC calibration sleep clock */
struct clk *ice_clk; /* SDHC peripheral ICE clock */
atomic_t clks_on; /* Set if clocks are enabled */
struct sdhci_msm_pltfm_data *pdata;
struct mmc_host *mmc;
@ -250,6 +263,8 @@ struct sdhci_msm_host {
int soc_min_rev;
struct workqueue_struct *pm_qos_wq;
struct sdhci_msm_dll_hsr *dll_hsr;
struct sdhci_msm_ice_data ice;
u32 ice_clk_rate;
};
extern char *saved_command_line;

drivers/mmc/host/sdhci.c

@ -301,6 +301,8 @@ static void sdhci_do_reset(struct sdhci_host *host, u8 mask)
/* Resetting the controller clears many */
host->preset_enabled = false;
}
if (host->is_crypto_en)
host->crypto_reset_reqd = true;
}
static void sdhci_set_default_irqs(struct sdhci_host *host)
@ -1868,6 +1870,49 @@ static int sdhci_get_tuning_cmd(struct sdhci_host *host)
return MMC_SEND_TUNING_BLOCK;
}
static int sdhci_crypto_cfg(struct sdhci_host *host, struct mmc_request *mrq,
u32 slot)
{
int err = 0;
if (host->crypto_reset_reqd && host->ops->crypto_engine_reset) {
err = host->ops->crypto_engine_reset(host);
if (err) {
pr_err("%s: crypto reset failed\n",
mmc_hostname(host->mmc));
goto out;
}
host->crypto_reset_reqd = false;
}
if (host->ops->crypto_engine_cfg) {
err = host->ops->crypto_engine_cfg(host, mrq, slot);
if (err) {
pr_err("%s: failed to configure crypto\n",
mmc_hostname(host->mmc));
goto out;
}
}
out:
return err;
}
static int sdhci_crypto_cfg_end(struct sdhci_host *host,
struct mmc_request *mrq)
{
int err = 0;
if (host->ops->crypto_engine_cfg_end) {
err = host->ops->crypto_engine_cfg_end(host, mrq);
if (err) {
pr_err("%s: failed to configure crypto\n",
mmc_hostname(host->mmc));
return err;
}
}
return 0;
}
static void sdhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
{
struct sdhci_host *host;
@ -1934,6 +1979,13 @@ static void sdhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
sdhci_get_tuning_cmd(host));
}
if (host->is_crypto_en) {
spin_unlock_irqrestore(&host->lock, flags);
if (sdhci_crypto_cfg(host, mrq, 0))
goto end_req;
spin_lock_irqsave(&host->lock, flags);
}
if (mrq->sbc && !(host->flags & SDHCI_AUTO_CMD23))
sdhci_send_command(host, mrq->sbc);
else
@ -1943,6 +1995,11 @@ static void sdhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
mmiowb();
spin_unlock_irqrestore(&host->lock, flags);
return;
end_req:
mrq->cmd->error = -EIO;
if (mrq->data)
mrq->data->error = -EIO;
mmc_request_done(host->mmc, mrq);
}
void sdhci_set_bus_width(struct sdhci_host *host, int width)
@ -2985,6 +3042,7 @@ static bool sdhci_request_done(struct sdhci_host *host)
mmiowb();
spin_unlock_irqrestore(&host->lock, flags);
sdhci_crypto_cfg_end(host, mrq);
mmc_request_done(host->mmc, mrq);
return false;

drivers/mmc/host/sdhci.h

@ -669,6 +669,8 @@ struct sdhci_host {
enum sdhci_power_policy power_policy;
bool sdio_irq_async_status;
bool is_crypto_en;
bool crypto_reset_reqd;
u32 auto_cmd_err_sts;
struct ratelimit_state dbg_dump_rs;
@ -709,6 +711,11 @@ struct sdhci_ops {
unsigned int (*get_ro)(struct sdhci_host *host);
void (*reset)(struct sdhci_host *host, u8 mask);
int (*platform_execute_tuning)(struct sdhci_host *host, u32 opcode);
int (*crypto_engine_cfg)(struct sdhci_host *host,
struct mmc_request *mrq, u32 slot);
int (*crypto_engine_cfg_end)(struct sdhci_host *host,
struct mmc_request *mrq);
int (*crypto_engine_reset)(struct sdhci_host *host);
void (*set_uhs_signaling)(struct sdhci_host *host, unsigned int uhs);
void (*hw_reset)(struct sdhci_host *host);
void (*adma_workaround)(struct sdhci_host *host, u32 intmask);

include/linux/mmc/core.h

@ -164,6 +164,7 @@ struct mmc_request {
*/
void (*recovery_notifier)(struct mmc_request *);
struct mmc_host *host;
struct request *req;
/* Allow other commands during this ongoing data transfer or busy wait */
bool cap_cmd_during_tfr;