ATA changes for 6.20

  - Cleanup IRQ masking in the handling of completed report zones
    commands (Niklas).
 
  - Improve the handling of Thunderbolt attached devices to speed up
    device removal (Henry).
 
  - Several patches to generalize the existing max_sec quirks to
    facilitate quirking the maximum command size of buggy drives, many
    of which have shown up with the recent increase of the default
    max_sectors block limit (Niklas).
 
  - Cleanup the ahci-platform and sata dt-bindings schema (Rob,
    Manivannan).
 
  - Improve device node scan in the ahci-dwc driver (Krzysztof).
 
  - Remove clang W=1 warnings with the ahci-imx and ahci-xgene drivers
    (Krzysztof).
 
  - Fix a long-standing potential command starvation situation with
    non-NCQ commands issued while NCQ commands are ongoing (me).
 
  - Limit max_sectors to 8191 on the INTEL SSDSC2KG480G8 SSD (Niklas).
 
  - Remove VESA Local Bus (VLB) support in the pata_legacy driver
    (Ethan).
 
  - Simple fixes in the pata_cypress (typo) and pata_ftide010 (timing)
    drivers (Ethan, Linus W.)
 -----BEGIN PGP SIGNATURE-----
 
 iHUEABYKAB0WIQSRPv8tYSvhwAzJdzjdoc3SxdoYdgUCaY5uKwAKCRDdoc3SxdoY
 dvn1AQCyhAHcegeAuQLL9L6pTdtKmObR0AOeeTkqOvGOWdb4agD+OVCeivi7KPBL
 zwzaJ5BhvwOS8FTiZzd+KHVpAQ0LtQk=
 =HvkS
 -----END PGP SIGNATURE-----

Merge tag 'ata-6.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/libata/linux

Pull ATA updates from Damien Le Moal:

 - Cleanup IRQ masking in the handling of completed report zones
   commands (Niklas)

 - Improve the handling of Thunderbolt attached devices to speed up
   device removal (Henry)

 - Several patches to generalize the existing max_sec quirks to
   facilitate quirking the maximum command size of buggy drives, many
   of which have shown up with the recent increase of the default
   max_sectors block limit (Niklas)

 - Cleanup the ahci-platform and sata dt-bindings schema (Rob,
   Manivannan)

 - Improve device node scan in the ahci-dwc driver (Krzysztof)

 - Remove clang W=1 warnings with the ahci-imx and ahci-xgene drivers
   (Krzysztof)

 - Fix a long-standing potential command starvation situation with
   non-NCQ commands issued while NCQ commands are ongoing (me)

 - Limit max_sectors to 8191 on the INTEL SSDSC2KG480G8 SSD (Niklas)

 - Remove VESA Local Bus (VLB) support in the pata_legacy driver (Ethan)

 - Simple fixes in the pata_cypress (typo) and pata_ftide010 (timing)
   drivers (Ethan, Linus W)

* tag 'ata-6.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/libata/linux:
  ata: pata_ftide010: Fix some DMA timings
  ata: pata_cypress: fix typo in error message
  ata: pata_legacy: remove VLB support
  ata: libata-core: Quirk INTEL SSDSC2KG480G8 max_sectors
  dt-bindings: ata: sata: Document the graph port
  ata: libata-scsi: avoid Non-NCQ command starvation
  ata: libata-scsi: refactor ata_scsi_translate()
  ata: ahci-xgene: Fix Wvoid-pointer-to-enum-cast warning
  ata: ahci-imx: Fix Wvoid-pointer-to-enum-cast warning
  ata: ahci-dwc: Simplify with scoped for each OF child loop
  dt-bindings: ata: ahci-platform: Drop unnecessary select schema
  ata: libata: Allow more quirks
  ata: libata: Add libata.force parameter max_sec
  ata: libata: Add support to parse equal sign in libata.force
  ata: libata: Change libata.force to use the generic ATA_QUIRK_MAX_SEC quirk
  ata: libata: Add ata_force_get_fe_for_dev() helper
  ata: libata: Add ATA_QUIRK_MAX_SEC and convert all device quirks
  ata: libata: avoid long timeouts on hot-unplugged SATA DAS
  ata: libata-scsi: Remove superfluous local_irq_save()
Linus Torvalds 2026-02-12 17:12:43 -08:00
commit 2c75a8d92c
16 changed files with 404 additions and 1041 deletions


@@ -3468,6 +3468,11 @@ Kernel parameters
* [no]logdir: Enable or disable access to the general
purpose log directory.
* max_sec=<sectors>: Set the transfer size limit, in
number of 512-byte sectors, to the value specified in
<sectors>. The value specified in <sectors> has to be
a non-zero positive integer.
* max_sec_128: Set transfer size limit to 128 sectors.
* max_sec_1024: Set or clear transfer size limit to
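The new max_sec=<sectors> form plugs into the existing libata.force=[ID:]VAL
command-line syntax. A hedged example of what a boot command line could look
like (the port and device IDs here are made up for illustration):

```text
# Limit device 0 on port 1 to 2048-sector (1 MiB) transfers:
libata.force=1.00:max_sec=2048

# The fixed-value shortcuts remain available, e.g. 128 sectors on port 2:
libata.force=2:max_sec_128
```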


@@ -18,26 +18,6 @@ maintainers:
- Hans de Goede <hdegoede@redhat.com>
- Jens Axboe <axboe@kernel.dk>
select:
properties:
compatible:
contains:
enum:
- brcm,iproc-ahci
- cavium,octeon-7130-ahci
- hisilicon,hisi-ahci
- ibm,476gtr-ahci
- marvell,armada-3700-ahci
- marvell,armada-8k-ahci
- marvell,berlin2q-ahci
- qcom,apq8064-ahci
- qcom,ipq806x-ahci
- socionext,uniphier-pro4-ahci
- socionext,uniphier-pxs2-ahci
- socionext,uniphier-pxs3-ahci
required:
- compatible
properties:
compatible:
oneOf:


@@ -54,4 +54,7 @@ $defs:
each port can have a Port Multiplier attached thus allowing to
access more than one drive by means of a single SATA port.
port:
$ref: /schemas/graph.yaml#/properties/port
...


@@ -1127,13 +1127,6 @@ config PATA_OF_PLATFORM
If unsure, say N.
config PATA_QDI
tristate "QDI VLB PATA support"
depends on ISA
select PATA_LEGACY
help
Support for QDI 6500 and 6580 PATA controllers on VESA local bus.
config PATA_RB532
tristate "RouterBoard 532 PATA CompactFlash support"
depends on MIKROTIK_RB532
@@ -1152,14 +1145,6 @@ config PATA_RZ1000
If unsure, say N.
config PATA_WINBOND_VLB
tristate "Winbond W83759A VLB PATA support (Experimental)"
depends on ISA
select PATA_LEGACY
help
Support for the Winbond W83759A controller on Vesa Local Bus
systems.
config PATA_PARPORT
tristate "Parallel port IDE device support"
depends on PARPORT_PC
@@ -1201,7 +1186,7 @@ config PATA_LEGACY
depends on (ISA || PCI) && HAS_IOPORT
select PATA_TIMINGS
help
This option enables support for ISA/VLB/PCI bus legacy PATA
This option enables support for ISA/PCI bus legacy PATA
ports and allows them to be accessed via the new ATA layer.
If unsure, say N.


@@ -260,7 +260,6 @@ static void ahci_dwc_init_timer(struct ahci_host_priv *hpriv)
static int ahci_dwc_init_dmacr(struct ahci_host_priv *hpriv)
{
struct ahci_dwc_host_priv *dpriv = hpriv->plat_data;
struct device_node *child;
void __iomem *port_mmio;
u32 port, dmacr, ts;
@@ -271,14 +270,9 @@ static int ahci_dwc_init_dmacr(struct ahci_host_priv *hpriv)
* the HBA global reset so we can freely initialize it once until the
* next system reset.
*/
for_each_child_of_node(dpriv->pdev->dev.of_node, child) {
if (!of_device_is_available(child))
continue;
if (of_property_read_u32(child, "reg", &port)) {
of_node_put(child);
for_each_available_child_of_node_scoped(dpriv->pdev->dev.of_node, child) {
if (of_property_read_u32(child, "reg", &port))
return -EINVAL;
}
port_mmio = __ahci_port_base(hpriv, port);
dmacr = readl(port_mmio + AHCI_DWC_PORT_DMACR);


@@ -869,7 +869,7 @@ static int imx_ahci_probe(struct platform_device *pdev)
imxpriv->ahci_pdev = pdev;
imxpriv->no_device = false;
imxpriv->first_time = true;
imxpriv->type = (enum ahci_imx_type)device_get_match_data(dev);
imxpriv->type = (unsigned long)device_get_match_data(dev);
imxpriv->sata_clk = devm_clk_get(dev, "sata");
if (IS_ERR(imxpriv->sata_clk)) {


@@ -773,7 +773,7 @@ static int xgene_ahci_probe(struct platform_device *pdev)
}
if (dev->of_node) {
version = (enum xgene_ahci_version)of_device_get_match_data(dev);
version = (unsigned long)of_device_get_match_data(dev);
}
#ifdef CONFIG_ACPI
else {


@@ -76,18 +76,20 @@ static unsigned int ata_dev_init_params(struct ata_device *dev,
u16 heads, u16 sectors);
static unsigned int ata_dev_set_xfermode(struct ata_device *dev);
static void ata_dev_xfermask(struct ata_device *dev);
static unsigned int ata_dev_quirks(const struct ata_device *dev);
static u64 ata_dev_quirks(const struct ata_device *dev);
static u64 ata_dev_get_quirk_value(struct ata_device *dev, u64 quirk);
static DEFINE_IDA(ata_ida);
#ifdef CONFIG_ATA_FORCE
struct ata_force_param {
const char *name;
u64 value;
u8 cbl;
u8 spd_limit;
unsigned int xfer_mask;
unsigned int quirk_on;
unsigned int quirk_off;
u64 quirk_on;
u64 quirk_off;
unsigned int pflags_on;
u16 lflags_on;
u16 lflags_off;
@@ -473,6 +475,33 @@ static void ata_force_xfermask(struct ata_device *dev)
}
}
static const struct ata_force_ent *
ata_force_get_fe_for_dev(struct ata_device *dev)
{
const struct ata_force_ent *fe;
int devno = dev->link->pmp + dev->devno;
int alt_devno = devno;
int i;
/* allow n.15/16 for devices attached to host port */
if (ata_is_host_link(dev->link))
alt_devno += 15;
for (i = 0; i < ata_force_tbl_size; i++) {
fe = &ata_force_tbl[i];
if (fe->port != -1 && fe->port != dev->link->ap->print_id)
continue;
if (fe->device != -1 && fe->device != devno &&
fe->device != alt_devno)
continue;
return fe;
}
return NULL;
}
/**
* ata_force_quirks - force quirks according to libata.force
* @dev: ATA device of interest
@@ -486,34 +515,19 @@ static void ata_force_xfermask(struct ata_device *dev)
*/
static void ata_force_quirks(struct ata_device *dev)
{
int devno = dev->link->pmp + dev->devno;
int alt_devno = devno;
int i;
const struct ata_force_ent *fe = ata_force_get_fe_for_dev(dev);
/* allow n.15/16 for devices attached to host port */
if (ata_is_host_link(dev->link))
alt_devno += 15;
if (!fe)
return;
for (i = 0; i < ata_force_tbl_size; i++) {
const struct ata_force_ent *fe = &ata_force_tbl[i];
if (!(~dev->quirks & fe->param.quirk_on) &&
!(dev->quirks & fe->param.quirk_off))
return;
if (fe->port != -1 && fe->port != dev->link->ap->print_id)
continue;
dev->quirks |= fe->param.quirk_on;
dev->quirks &= ~fe->param.quirk_off;
if (fe->device != -1 && fe->device != devno &&
fe->device != alt_devno)
continue;
if (!(~dev->quirks & fe->param.quirk_on) &&
!(dev->quirks & fe->param.quirk_off))
continue;
dev->quirks |= fe->param.quirk_on;
dev->quirks &= ~fe->param.quirk_off;
ata_dev_notice(dev, "FORCE: modified (%s)\n",
fe->param.name);
}
ata_dev_notice(dev, "FORCE: modified (%s)\n", fe->param.name);
}
#else
static inline void ata_force_pflags(struct ata_port *ap) { }
@@ -2358,6 +2372,24 @@ static bool ata_dev_check_adapter(struct ata_device *dev,
return false;
}
bool ata_adapter_is_online(struct ata_port *ap)
{
struct device *dev;
if (!ap || !ap->host)
return false;
dev = ap->host->dev;
if (!dev)
return false;
if (dev_is_pci(dev) &&
pci_channel_offline(to_pci_dev(dev)))
return false;
return true;
}
static int ata_dev_config_ncq(struct ata_device *dev,
char *desc, size_t desc_sz)
{
@@ -3144,17 +3176,10 @@ int ata_dev_configure(struct ata_device *dev)
dev->quirks |= ATA_QUIRK_STUCK_ERR;
}
if (dev->quirks & ATA_QUIRK_MAX_SEC_128)
dev->max_sectors = min_t(unsigned int, ATA_MAX_SECTORS_128,
dev->max_sectors);
if (dev->quirks & ATA_QUIRK_MAX_SEC_1024)
dev->max_sectors = min_t(unsigned int, ATA_MAX_SECTORS_1024,
dev->max_sectors);
if (dev->quirks & ATA_QUIRK_MAX_SEC_8191)
dev->max_sectors = min_t(unsigned int, ATA_MAX_SECTORS_8191,
dev->max_sectors);
if (dev->quirks & ATA_QUIRK_MAX_SEC)
dev->max_sectors = min_t(unsigned int, dev->max_sectors,
ata_dev_get_quirk_value(dev,
ATA_QUIRK_MAX_SEC));
if (dev->quirks & ATA_QUIRK_MAX_SEC_LBA48)
dev->max_sectors = ATA_MAX_SECTORS_LBA48;
@@ -3986,7 +4011,6 @@ static const char * const ata_quirk_names[] = {
[__ATA_QUIRK_DIAGNOSTIC] = "diagnostic",
[__ATA_QUIRK_NODMA] = "nodma",
[__ATA_QUIRK_NONCQ] = "noncq",
[__ATA_QUIRK_MAX_SEC_128] = "maxsec128",
[__ATA_QUIRK_BROKEN_HPA] = "brokenhpa",
[__ATA_QUIRK_DISABLE] = "disable",
[__ATA_QUIRK_HPA_SIZE] = "hpasize",
@@ -4007,8 +4031,7 @@ static const char * const ata_quirk_names[] = {
[__ATA_QUIRK_ZERO_AFTER_TRIM] = "zeroaftertrim",
[__ATA_QUIRK_NO_DMA_LOG] = "nodmalog",
[__ATA_QUIRK_NOTRIM] = "notrim",
[__ATA_QUIRK_MAX_SEC_1024] = "maxsec1024",
[__ATA_QUIRK_MAX_SEC_8191] = "maxsec8191",
[__ATA_QUIRK_MAX_SEC] = "maxsec",
[__ATA_QUIRK_MAX_TRIM_128M] = "maxtrim128m",
[__ATA_QUIRK_NO_NCQ_ON_ATI] = "noncqonati",
[__ATA_QUIRK_NO_LPM_ON_ATI] = "nolpmonati",
@@ -4053,10 +4076,26 @@ static void ata_dev_print_quirks(const struct ata_device *dev,
kfree(str);
}
struct ata_dev_quirk_value {
const char *model_num;
const char *model_rev;
u64 val;
};
static const struct ata_dev_quirk_value __ata_dev_max_sec_quirks[] = {
{ "TORiSAN DVD-ROM DRD-N216", NULL, 128 },
{ "ST380013AS", "3.20", 1024 },
{ "LITEON CX1-JB*-HP", NULL, 1024 },
{ "LITEON EP1-*", NULL, 1024 },
{ "DELLBOSS VD", "MV.R00-0", 8191 },
{ "INTEL SSDSC2KG480G8", "XCV10120", 8191 },
{ },
};
struct ata_dev_quirks_entry {
const char *model_num;
const char *model_rev;
unsigned int quirks;
u64 quirks;
};
static const struct ata_dev_quirks_entry __ata_dev_quirks[] = {
@@ -4097,7 +4136,7 @@ static const struct ata_dev_quirks_entry __ata_dev_quirks[] = {
{ "ASMT109x- Config", NULL, ATA_QUIRK_DISABLE },
/* Weird ATAPI devices */
{ "TORiSAN DVD-ROM DRD-N216", NULL, ATA_QUIRK_MAX_SEC_128 },
{ "TORiSAN DVD-ROM DRD-N216", NULL, ATA_QUIRK_MAX_SEC },
{ "QUANTUM DAT DAT72-000", NULL, ATA_QUIRK_ATAPI_MOD16_DMA },
{ "Slimtype DVD A DS8A8SH", NULL, ATA_QUIRK_MAX_SEC_LBA48 },
{ "Slimtype DVD A DS8A9SH", NULL, ATA_QUIRK_MAX_SEC_LBA48 },
@@ -4106,20 +4145,20 @@ static const struct ata_dev_quirks_entry __ata_dev_quirks[] = {
* Causes silent data corruption with higher max sects.
* http://lkml.kernel.org/g/x49wpy40ysk.fsf@segfault.boston.devel.redhat.com
*/
{ "ST380013AS", "3.20", ATA_QUIRK_MAX_SEC_1024 },
{ "ST380013AS", "3.20", ATA_QUIRK_MAX_SEC },
/*
* These devices time out with higher max sects.
* https://bugzilla.kernel.org/show_bug.cgi?id=121671
*/
{ "LITEON CX1-JB*-HP", NULL, ATA_QUIRK_MAX_SEC_1024 },
{ "LITEON EP1-*", NULL, ATA_QUIRK_MAX_SEC_1024 },
{ "LITEON CX1-JB*-HP", NULL, ATA_QUIRK_MAX_SEC },
{ "LITEON EP1-*", NULL, ATA_QUIRK_MAX_SEC },
/*
* These devices time out with higher max sects.
* https://bugzilla.kernel.org/show_bug.cgi?id=220693
*/
{ "DELLBOSS VD", "MV.R00-0", ATA_QUIRK_MAX_SEC_8191 },
{ "DELLBOSS VD", "MV.R00-0", ATA_QUIRK_MAX_SEC },
/* Devices we expect to fail diagnostics */
@@ -4307,6 +4346,8 @@ static const struct ata_dev_quirks_entry __ata_dev_quirks[] = {
{ "Micron*", NULL, ATA_QUIRK_ZERO_AFTER_TRIM },
{ "Crucial*", NULL, ATA_QUIRK_ZERO_AFTER_TRIM },
{ "INTEL SSDSC2KG480G8", "XCV10120", ATA_QUIRK_ZERO_AFTER_TRIM |
ATA_QUIRK_MAX_SEC },
{ "INTEL*SSD*", NULL, ATA_QUIRK_ZERO_AFTER_TRIM },
{ "SSD*INTEL*", NULL, ATA_QUIRK_ZERO_AFTER_TRIM },
{ "Samsung*SSD*", NULL, ATA_QUIRK_ZERO_AFTER_TRIM },
@@ -4348,14 +4389,14 @@ static const struct ata_dev_quirks_entry __ata_dev_quirks[] = {
{ }
};
static unsigned int ata_dev_quirks(const struct ata_device *dev)
static u64 ata_dev_quirks(const struct ata_device *dev)
{
unsigned char model_num[ATA_ID_PROD_LEN + 1];
unsigned char model_rev[ATA_ID_FW_REV_LEN + 1];
const struct ata_dev_quirks_entry *ad = __ata_dev_quirks;
/* dev->quirks is an unsigned int. */
BUILD_BUG_ON(__ATA_QUIRK_MAX > 32);
/* dev->quirks is a u64. */
BUILD_BUG_ON(__ATA_QUIRK_MAX > 64);
ata_id_c_string(dev->id, model_num, ATA_ID_PROD, sizeof(model_num));
ata_id_c_string(dev->id, model_rev, ATA_ID_FW_REV, sizeof(model_rev));
@@ -4372,6 +4413,48 @@ static unsigned int ata_dev_quirks(const struct ata_device *dev)
return 0;
}
static u64 ata_dev_get_max_sec_quirk_value(struct ata_device *dev)
{
unsigned char model_num[ATA_ID_PROD_LEN + 1];
unsigned char model_rev[ATA_ID_FW_REV_LEN + 1];
const struct ata_dev_quirk_value *ad = __ata_dev_max_sec_quirks;
u64 val = 0;
#ifdef CONFIG_ATA_FORCE
const struct ata_force_ent *fe = ata_force_get_fe_for_dev(dev);
if (fe && (fe->param.quirk_on & ATA_QUIRK_MAX_SEC) && fe->param.value)
val = fe->param.value;
#endif
if (val)
goto out;
ata_id_c_string(dev->id, model_num, ATA_ID_PROD, sizeof(model_num));
ata_id_c_string(dev->id, model_rev, ATA_ID_FW_REV, sizeof(model_rev));
while (ad->model_num) {
if (glob_match(ad->model_num, model_num) &&
(!ad->model_rev || glob_match(ad->model_rev, model_rev))) {
val = ad->val;
break;
}
ad++;
}
out:
ata_dev_warn(dev, "%s quirk is using value: %llu\n",
ata_quirk_names[__ATA_QUIRK_MAX_SEC], val);
return val;
}
static u64 ata_dev_get_quirk_value(struct ata_device *dev, u64 quirk)
{
if (quirk == ATA_QUIRK_MAX_SEC)
return ata_dev_get_max_sec_quirk_value(dev);
return 0;
}
static bool ata_dev_nodma(const struct ata_device *dev)
{
/*
@@ -5082,6 +5165,12 @@ void ata_qc_issue(struct ata_queued_cmd *qc)
qc->flags |= ATA_QCFLAG_ACTIVE;
ap->qc_active |= 1ULL << qc->tag;
/* Make sure the device is still accessible. */
if (!ata_adapter_is_online(ap)) {
qc->err_mask |= AC_ERR_HOST_BUS;
goto sys_err;
}
/*
* We guarantee to LLDs that they will have at least one
* non-zero sg if the command is a data command.
@@ -5567,6 +5656,7 @@ struct ata_port *ata_port_alloc(struct ata_host *host)
mutex_init(&ap->scsi_scan_mutex);
INIT_DELAYED_WORK(&ap->hotplug_task, ata_scsi_hotplug);
INIT_DELAYED_WORK(&ap->scsi_rescan_task, ata_scsi_dev_rescan);
INIT_WORK(&ap->deferred_qc_work, ata_scsi_deferred_qc_work);
INIT_LIST_HEAD(&ap->eh_done_q);
init_waitqueue_head(&ap->eh_wait_q);
init_completion(&ap->park_req_pending);
@@ -6179,6 +6269,10 @@ static void ata_port_detach(struct ata_port *ap)
}
}
/* Make sure the deferred qc work finished. */
cancel_work_sync(&ap->deferred_qc_work);
WARN_ON(ap->deferred_qc);
/* Tell EH to disable all devices */
ap->pflags |= ATA_PFLAG_UNLOADING;
ata_port_schedule_eh(ap);
@@ -6405,10 +6499,21 @@ EXPORT_SYMBOL_GPL(ata_platform_remove_one);
#define force_quirk_on(name, flag) \
{ #name, .quirk_on = (flag) }
#define force_quirk_val(name, flag, val) \
{ #name, .quirk_on = (flag), \
.value = (val) }
#define force_quirk_onoff(name, flag) \
{ "no" #name, .quirk_on = (flag) }, \
{ #name, .quirk_off = (flag) }
/*
* If the ata_force_param struct member 'name' ends with '=', then the value
* after the equal sign will be parsed as a u64, and will be saved in the
* ata_force_param struct member 'value'. This works because each libata.force
* entry (struct ata_force_ent) is separated by commas, so each entry represents
* a single quirk, and can thus only have a single value.
*/
static const struct ata_force_param force_tbl[] __initconst = {
force_cbl(40c, ATA_CBL_PATA40),
force_cbl(80c, ATA_CBL_PATA80),
@@ -6479,8 +6584,9 @@ static const struct ata_force_param force_tbl[] __initconst = {
force_quirk_onoff(iddevlog, ATA_QUIRK_NO_ID_DEV_LOG),
force_quirk_onoff(logdir, ATA_QUIRK_NO_LOG_DIR),
force_quirk_on(max_sec_128, ATA_QUIRK_MAX_SEC_128),
force_quirk_on(max_sec_1024, ATA_QUIRK_MAX_SEC_1024),
force_quirk_val(max_sec_128, ATA_QUIRK_MAX_SEC, 128),
force_quirk_val(max_sec_1024, ATA_QUIRK_MAX_SEC, 1024),
force_quirk_on(max_sec=, ATA_QUIRK_MAX_SEC),
force_quirk_on(max_sec_lba48, ATA_QUIRK_MAX_SEC_LBA48),
force_quirk_onoff(lpm, ATA_QUIRK_NOLPM),
@@ -6496,8 +6602,9 @@ static int __init ata_parse_force_one(char **cur,
const char **reason)
{
char *start = *cur, *p = *cur;
char *id, *val, *endp;
char *id, *val, *endp, *equalsign, *char_after_equalsign;
const struct ata_force_param *match_fp = NULL;
u64 val_after_equalsign;
int nr_matches = 0, i;
/* find where this param ends and update *cur */
@@ -6540,10 +6647,36 @@
}
parse_val:
/* parse val, allow shortcuts so that both 1.5 and 1.5Gbps work */
equalsign = strchr(val, '=');
if (equalsign) {
char_after_equalsign = equalsign + 1;
if (!strlen(char_after_equalsign) ||
kstrtoull(char_after_equalsign, 10, &val_after_equalsign)) {
*reason = "invalid value after equal sign";
return -EINVAL;
}
}
/* Parse the parameter value. */
for (i = 0; i < ARRAY_SIZE(force_tbl); i++) {
const struct ata_force_param *fp = &force_tbl[i];
/*
* If val contains equal sign, match has to be exact, i.e.
* shortcuts are not supported.
*/
if (equalsign &&
(strncasecmp(val, fp->name,
char_after_equalsign - val) == 0)) {
force_ent->param = *fp;
force_ent->param.value = val_after_equalsign;
return 0;
}
/*
* If val does not contain equal sign, allow shortcuts so that
* both 1.5 and 1.5Gbps work.
*/
if (strncasecmp(val, fp->name, strlen(val)))
continue;


@@ -736,7 +736,8 @@ void ata_scsi_port_error_handler(struct Scsi_Host *host, struct ata_port *ap)
spin_unlock_irqrestore(ap->lock, flags);
/* invoke EH, skip if unloading or suspended */
if (!(ap->pflags & (ATA_PFLAG_UNLOADING | ATA_PFLAG_SUSPENDED)))
if (!(ap->pflags & (ATA_PFLAG_UNLOADING | ATA_PFLAG_SUSPENDED)) &&
ata_adapter_is_online(ap))
ap->ops->error_handler(ap);
else {
/* if unloading, commence suicide */
@@ -917,6 +918,12 @@ static void ata_eh_set_pending(struct ata_port *ap, bool fastdrain)
ap->pflags |= ATA_PFLAG_EH_PENDING;
/*
* If we have a deferred qc, requeue it so that it is retried once EH
* completes.
*/
ata_scsi_requeue_deferred_qc(ap);
if (!fastdrain)
return;


@@ -1658,8 +1658,77 @@ static void ata_qc_done(struct ata_queued_cmd *qc)
done(cmd);
}
void ata_scsi_deferred_qc_work(struct work_struct *work)
{
struct ata_port *ap =
container_of(work, struct ata_port, deferred_qc_work);
struct ata_queued_cmd *qc;
unsigned long flags;
spin_lock_irqsave(ap->lock, flags);
/*
* If we still have a deferred qc and we are not in EH, issue it. In
* such case, we should not need any more deferring the qc, so warn if
* qc_defer() says otherwise.
*/
qc = ap->deferred_qc;
if (qc && !ata_port_eh_scheduled(ap)) {
WARN_ON_ONCE(ap->ops->qc_defer(qc));
ap->deferred_qc = NULL;
ata_qc_issue(qc);
}
spin_unlock_irqrestore(ap->lock, flags);
}
void ata_scsi_requeue_deferred_qc(struct ata_port *ap)
{
struct ata_queued_cmd *qc = ap->deferred_qc;
struct scsi_cmnd *scmd;
lockdep_assert_held(ap->lock);
/*
* If we have a deferred qc when a reset occurs or NCQ commands fail,
* do not try to be smart about what to do with this deferred command
* and simply retry it by completing it with DID_SOFT_ERROR.
*/
if (!qc)
return;
scmd = qc->scsicmd;
ap->deferred_qc = NULL;
ata_qc_free(qc);
scmd->result = (DID_SOFT_ERROR << 16);
scsi_done(scmd);
}
static void ata_scsi_schedule_deferred_qc(struct ata_port *ap)
{
struct ata_queued_cmd *qc = ap->deferred_qc;
lockdep_assert_held(ap->lock);
/*
* If we have a deferred qc, then qc_defer() is defined and we can use
* this callback to determine if this qc is good to go, unless EH has
* been scheduled.
*/
if (!qc)
return;
if (ata_port_eh_scheduled(ap)) {
ata_scsi_requeue_deferred_qc(ap);
return;
}
if (!ap->ops->qc_defer(qc))
queue_work(system_highpri_wq, &ap->deferred_qc_work);
}
static void ata_scsi_qc_complete(struct ata_queued_cmd *qc)
{
struct ata_port *ap = qc->ap;
struct scsi_cmnd *cmd = qc->scsicmd;
u8 *cdb = cmd->cmnd;
bool have_sense = qc->flags & ATA_QCFLAG_SENSE_VALID;
@@ -1689,6 +1758,66 @@ static void ata_scsi_qc_complete(struct ata_queued_cmd *qc)
}
ata_qc_done(qc);
ata_scsi_schedule_deferred_qc(ap);
}
static int ata_scsi_qc_issue(struct ata_port *ap, struct ata_queued_cmd *qc)
{
int ret;
if (!ap->ops->qc_defer)
goto issue;
/*
* If we already have a deferred qc, then rely on the SCSI layer to
* requeue and defer all incoming commands until the deferred qc is
* processed, once all on-going commands complete.
*/
if (ap->deferred_qc) {
ata_qc_free(qc);
return SCSI_MLQUEUE_DEVICE_BUSY;
}
/* Check if the command needs to be deferred. */
ret = ap->ops->qc_defer(qc);
switch (ret) {
case 0:
break;
case ATA_DEFER_LINK:
ret = SCSI_MLQUEUE_DEVICE_BUSY;
break;
case ATA_DEFER_PORT:
ret = SCSI_MLQUEUE_HOST_BUSY;
break;
default:
WARN_ON_ONCE(1);
ret = SCSI_MLQUEUE_HOST_BUSY;
break;
}
if (ret) {
/*
* We must defer this qc: if this is not an NCQ command, keep
* this qc as a deferred one and report to the SCSI layer that
* we issued it so that it is not requeued. The deferred qc will
* be issued with the port deferred_qc_work once all on-going
* commands complete.
*/
if (!ata_is_ncq(qc->tf.protocol)) {
ap->deferred_qc = qc;
return 0;
}
/* Force a requeue of the command to defer its execution. */
ata_qc_free(qc);
return ret;
}
issue:
ata_qc_issue(qc);
return 0;
}
/**
@@ -1714,66 +1843,49 @@ static void ata_scsi_qc_complete(struct ata_queued_cmd *qc)
* spin_lock_irqsave(host lock)
*
* RETURNS:
* 0 on success, SCSI_MLQUEUE_DEVICE_BUSY if the command
* needs to be deferred.
* 0 on success, SCSI_MLQUEUE_DEVICE_BUSY or SCSI_MLQUEUE_HOST_BUSY if the
* command needs to be deferred.
*/
static int ata_scsi_translate(struct ata_device *dev, struct scsi_cmnd *cmd,
ata_xlat_func_t xlat_func)
{
struct ata_port *ap = dev->link->ap;
struct ata_queued_cmd *qc;
int rc;
lockdep_assert_held(ap->lock);
/*
* ata_scsi_qc_new() calls scsi_done(cmd) in case of failure. So we
* have nothing further to do when allocating a qc fails.
*/
qc = ata_scsi_qc_new(dev, cmd);
if (!qc)
goto err_mem;
return 0;
/* data is present; dma-map it */
if (cmd->sc_data_direction == DMA_FROM_DEVICE ||
cmd->sc_data_direction == DMA_TO_DEVICE) {
if (unlikely(scsi_bufflen(cmd) < 1)) {
ata_dev_warn(dev, "WARNING: zero len r/w req\n");
goto err_did;
cmd->result = (DID_ERROR << 16);
goto done;
}
ata_sg_init(qc, scsi_sglist(cmd), scsi_sg_count(cmd));
qc->dma_dir = cmd->sc_data_direction;
}
qc->complete_fn = ata_scsi_qc_complete;
if (xlat_func(qc))
goto early_finish;
goto done;
if (ap->ops->qc_defer) {
if ((rc = ap->ops->qc_defer(qc)))
goto defer;
}
return ata_scsi_qc_issue(ap, qc);
/* select device, send command to hardware */
ata_qc_issue(qc);
return 0;
early_finish:
done:
ata_qc_free(qc);
scsi_done(cmd);
return 0;
err_did:
ata_qc_free(qc);
cmd->result = (DID_ERROR << 16);
scsi_done(cmd);
err_mem:
return 0;
defer:
ata_qc_free(qc);
if (rc == ATA_DEFER_LINK)
return SCSI_MLQUEUE_DEVICE_BUSY;
else
return SCSI_MLQUEUE_HOST_BUSY;
}
/**
@@ -2982,6 +3094,9 @@ ata_scsi_find_dev(struct ata_port *ap, const struct scsi_device *scsidev)
{
struct ata_device *dev = __ata_scsi_find_dev(ap, scsidev);
if (!ata_adapter_is_online(ap))
return NULL;
if (unlikely(!dev || !ata_dev_enabled(dev)))
return NULL;
@@ -3573,13 +3688,13 @@ static void ata_scsi_report_zones_complete(struct ata_queued_cmd *qc)
{
struct scsi_cmnd *scmd = qc->scsicmd;
struct sg_mapping_iter miter;
unsigned long flags;
unsigned int bytes = 0;
lockdep_assert_held(qc->ap->lock);
sg_miter_start(&miter, scsi_sglist(scmd), scsi_sg_count(scmd),
SG_MITER_TO_SG | SG_MITER_ATOMIC);
local_irq_save(flags);
while (sg_miter_next(&miter)) {
unsigned int offset = 0;
@@ -3627,7 +3742,6 @@ static void ata_scsi_report_zones_complete(struct ata_queued_cmd *qc)
}
}
sg_miter_stop(&miter);
local_irq_restore(flags);
ata_scsi_qc_complete(qc);
}


@@ -94,6 +94,7 @@ extern int atapi_check_dma(struct ata_queued_cmd *qc);
extern void swap_buf_le16(u16 *buf, unsigned int buf_words);
extern bool ata_phys_link_online(struct ata_link *link);
extern bool ata_phys_link_offline(struct ata_link *link);
bool ata_adapter_is_online(struct ata_port *ap);
extern void ata_dev_init(struct ata_device *dev);
extern void ata_link_init(struct ata_port *ap, struct ata_link *link, int pmp);
extern int sata_link_init_spd(struct ata_link *link);
@@ -166,6 +167,8 @@ int ata_scsi_dev_config(struct scsi_device *sdev, struct queue_limits *lim,
struct ata_device *dev);
enum scsi_qc_status __ata_scsi_queuecmd(struct scsi_cmnd *scmd,
struct ata_device *dev);
void ata_scsi_deferred_qc_work(struct work_struct *work);
void ata_scsi_requeue_deferred_qc(struct ata_port *ap);
/* libata-eh.c */
extern unsigned int ata_internal_cmd_timeout(struct ata_device *dev, u8 cmd);


@@ -62,7 +62,7 @@ static void cy82c693_set_piomode(struct ata_port *ap, struct ata_device *adev)
u32 addr;
if (ata_timing_compute(adev, adev->pio_mode, &t, T, 1) < 0) {
ata_dev_err(adev, DRV_NAME ": mome computation failed.\n");
ata_dev_err(adev, DRV_NAME ": mode computation failed.\n");
return;
}


@@ -122,10 +122,10 @@ static const u8 mwdma_50_active_time[3] = {6, 2, 2};
static const u8 mwdma_50_recovery_time[3] = {6, 2, 1};
static const u8 mwdma_66_active_time[3] = {8, 3, 3};
static const u8 mwdma_66_recovery_time[3] = {8, 2, 1};
static const u8 udma_50_setup_time[6] = {3, 3, 2, 2, 1, 1};
static const u8 udma_50_setup_time[6] = {3, 3, 2, 2, 1, 9};
static const u8 udma_50_hold_time[6] = {3, 1, 1, 1, 1, 1};
static const u8 udma_66_setup_time[7] = {4, 4, 3, 2, };
static const u8 udma_66_hold_time[7] = {};
static const u8 udma_66_setup_time[7] = {4, 4, 3, 2, 1, 9, 9};
static const u8 udma_66_hold_time[7] = {4, 2, 1, 1, 1, 1, 1};
/*
* We set 66 MHz for all MWDMA modes


@@ -5,43 +5,10 @@
*
* An ATA driver for the legacy ATA ports.
*
* Data Sources:
* Opti 82C465/82C611 support: Data sheets at opti-inc.com
* HT6560 series:
* Promise 20230/20620:
* http://www.ryston.cz/petr/vlb/pdc20230b.html
* http://www.ryston.cz/petr/vlb/pdc20230c.html
* http://www.ryston.cz/petr/vlb/pdc20630.html
* QDI65x0:
* http://www.ryston.cz/petr/vlb/qd6500.html
* http://www.ryston.cz/petr/vlb/qd6580.html
*
* QDI65x0 probe code based on drivers/ide/legacy/qd65xx.c
* Rewritten from the work of Colten Edwards <pje120@cs.usask.ca> by
* Samuel Thibault <samuel.thibault@ens-lyon.org>
*
* Unsupported but docs exist:
* Appian/Adaptec AIC25VL01/Cirrus Logic PD7220
*
* This driver handles legacy (that is "ISA/VLB side") IDE ports found
* on PC class systems. There are three hybrid devices that are exceptions
* This driver handles legacy (that is "ISA side") IDE ports found
* on PC class systems. There are three hybrid devices that are exceptions:
* The Cyrix 5510/5520 where a pre SFF ATA device is on the bridge and
* the MPIIX where the tuning is PCI side but the IDE is "ISA side".
*
* Specific support is included for the ht6560a/ht6560b/opti82c611a/
* opti82c465mv/promise 20230c/20630/qdi65x0/winbond83759A
*
* Support for the Winbond 83759A when operating in advanced mode.
* Multichip mode is not currently supported.
*
* Use the autospeed and pio_mask options with:
* Appian ADI/2 aka CLPD7220 or AIC25VL01.
* Use the jumpers, autospeed and set pio_mask to the mode on the jumpers with
* Goldstar GM82C711, PIC-1288A-125, UMC 82C871F, Winbond W83759,
* Winbond W83759A, Promise PDC20230-B
*
* For now use autospeed and pio_mask as above with the W83759A. This may
* change.
*/
#include <linux/async.h>
@@ -87,55 +54,9 @@ static int iordy_mask = 0xFFFFFFFF;
module_param(iordy_mask, int, 0);
MODULE_PARM_DESC(iordy_mask, "Use IORDY if available");
static int ht6560a;
module_param(ht6560a, int, 0);
MODULE_PARM_DESC(ht6560a, "HT 6560A on primary 1, second 2, both 3");
static int ht6560b;
module_param(ht6560b, int, 0);
MODULE_PARM_DESC(ht6560b, "HT 6560B on primary 1, secondary 2, both 3");
static int opti82c611a;
module_param(opti82c611a, int, 0);
MODULE_PARM_DESC(opti82c611a,
"Opti 82c611A on primary 1, secondary 2, both 3");
static int opti82c46x;
module_param(opti82c46x, int, 0);
MODULE_PARM_DESC(opti82c46x,
"Opti 82c465MV on primary 1, secondary 2, both 3");
#ifdef CONFIG_PATA_QDI_MODULE
static int qdi = 1;
#else
static int qdi;
#endif
module_param(qdi, int, 0);
MODULE_PARM_DESC(qdi, "Set to probe QDI controllers");
#ifdef CONFIG_PATA_WINBOND_VLB_MODULE
static int winbond = 1;
#else
static int winbond;
#endif
module_param(winbond, int, 0);
MODULE_PARM_DESC(winbond,
"Set to probe Winbond controllers, "
"give I/O port if non standard");
enum controller {
BIOS = 0,
SNOOP = 1,
PDC20230 = 2,
HT6560A = 3,
HT6560B = 4,
OPTI611A = 5,
OPTI46X = 6,
QDI6500 = 7,
QDI6580 = 8,
QDI6580DP = 9, /* Dual channel mode is different */
W83759A = 10,
UNKNOWN = -1
};
@@ -183,7 +104,7 @@ static struct ata_host *legacy_host[NR_HOST];
*
* Add an entry into the probe list for ATA controllers. This is used
* to add the default ISA slots and then to build up the table
* further according to other ISA/VLB/Weird device scans
* further according to other ISA/Weird device scans
*
* An I/O port list is used to keep ordering stable and sane, as we
* don't have any good way to talk about ordering otherwise
@@ -276,613 +197,11 @@ static struct ata_port_operations legacy_port_ops = {
.set_mode = legacy_set_mode,
};
/*
* Promise 20230C and 20620 support
*
* This controller supports PIO0 to PIO2. We set PIO timings
* conservatively to allow for 50MHz Vesa Local Bus. The 20620 DMA
* support is weird being DMA to controller and PIO'd to the host
* and not supported.
*/
static void pdc20230_set_piomode(struct ata_port *ap, struct ata_device *adev)
{
int tries = 5;
int pio = adev->pio_mode - XFER_PIO_0;
u8 rt;
unsigned long flags;
/* Safe as UP only. Force I/Os to occur together */
local_irq_save(flags);
/* Unlock the control interface */
do {
inb(0x1F5);
outb(inb(0x1F2) | 0x80, 0x1F2);
inb(0x1F2);
inb(0x3F6);
inb(0x3F6);
inb(0x1F2);
inb(0x1F2);
}
while ((inb(0x1F2) & 0x80) && --tries);
local_irq_restore(flags);
outb(inb(0x1F4) & 0x07, 0x1F4);
rt = inb(0x1F3);
rt &= ~(0x07 << (3 * !adev->devno));
if (pio)
rt |= (1 + 3 * pio) << (3 * !adev->devno);
outb(rt, 0x1F3);
udelay(100);
outb(inb(0x1F2) | 0x01, 0x1F2);
udelay(100);
inb(0x1F5);
}
static unsigned int pdc_data_xfer_vlb(struct ata_queued_cmd *qc,
unsigned char *buf, unsigned int buflen, int rw)
{
struct ata_device *dev = qc->dev;
struct ata_port *ap = dev->link->ap;
int slop = buflen & 3;
/* 32bit I/O capable *and* we need to write a whole number of dwords */
if (ata_id_has_dword_io(dev->id) && (slop == 0 || slop == 3)
&& (ap->pflags & ATA_PFLAG_PIO32)) {
unsigned long flags;
local_irq_save(flags);
/* Perform the 32bit I/O synchronization sequence */
ioread8(ap->ioaddr.nsect_addr);
ioread8(ap->ioaddr.nsect_addr);
ioread8(ap->ioaddr.nsect_addr);
/* Now the data */
if (rw == READ)
ioread32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
else
iowrite32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
if (unlikely(slop)) {
__le32 pad = 0;
if (rw == READ) {
pad = cpu_to_le32(ioread32(ap->ioaddr.data_addr));
memcpy(buf + buflen - slop, &pad, slop);
} else {
memcpy(&pad, buf + buflen - slop, slop);
iowrite32(le32_to_cpu(pad), ap->ioaddr.data_addr);
}
buflen += 4 - slop;
}
local_irq_restore(flags);
} else
buflen = ata_sff_data_xfer32(qc, buf, buflen, rw);
return buflen;
}
static struct ata_port_operations pdc20230_port_ops = {
.inherits = &legacy_base_port_ops,
.set_piomode = pdc20230_set_piomode,
.sff_data_xfer = pdc_data_xfer_vlb,
};
/*
* Holtek 6560A support
*
* This controller supports PIO0 to PIO2 (no IORDY even though higher
* timings can be loaded).
*/
static void ht6560a_set_piomode(struct ata_port *ap, struct ata_device *adev)
{
u8 active, recover;
struct ata_timing t;
/* Get the timing data in cycles. For now play safe at 50MHz */
ata_timing_compute(adev, adev->pio_mode, &t, 20000, 1000);
active = clamp_val(t.active, 2, 15);
recover = clamp_val(t.recover, 4, 15);
inb(0x3E6);
inb(0x3E6);
inb(0x3E6);
inb(0x3E6);
iowrite8(recover << 4 | active, ap->ioaddr.device_addr);
ioread8(ap->ioaddr.status_addr);
}
static struct ata_port_operations ht6560a_port_ops = {
.inherits = &legacy_base_port_ops,
.set_piomode = ht6560a_set_piomode,
};
/*
* Holtek 6560B support
*
* This controller supports PIO0 to PIO4. We honour the BIOS/jumper FIFO
* setting unless we see an ATAPI device in which case we force it off.
*
* FIXME: need to implement 2nd channel support.
*/
static void ht6560b_set_piomode(struct ata_port *ap, struct ata_device *adev)
{
u8 active, recover;
struct ata_timing t;
/* Get the timing data in cycles. For now play safe at 50MHz */
ata_timing_compute(adev, adev->pio_mode, &t, 20000, 1000);
active = clamp_val(t.active, 2, 15);
recover = clamp_val(t.recover, 2, 16) & 0x0F;
inb(0x3E6);
inb(0x3E6);
inb(0x3E6);
inb(0x3E6);
iowrite8(recover << 4 | active, ap->ioaddr.device_addr);
if (adev->class != ATA_DEV_ATA) {
u8 rconf = inb(0x3E6);
if (rconf & 0x24) {
rconf &= ~0x24;
outb(rconf, 0x3E6);
}
}
ioread8(ap->ioaddr.status_addr);
}
static struct ata_port_operations ht6560b_port_ops = {
.inherits = &legacy_base_port_ops,
.set_piomode = ht6560b_set_piomode,
};
/*
* Opti core chipset helpers
*/
/**
* opti_syscfg - read OPTI chipset configuration
* @reg: Configuration register to read
*
* Returns the value of an OPTI system board configuration register.
*/
static u8 opti_syscfg(u8 reg)
{
unsigned long flags;
u8 r;
/* Uniprocessor chipset and must force cycles adjacent */
local_irq_save(flags);
outb(reg, 0x22);
r = inb(0x24);
local_irq_restore(flags);
return r;
}
/*
* Opti 82C611A
*
* This controller supports PIO0 to PIO3.
*/
static void opti82c611a_set_piomode(struct ata_port *ap,
struct ata_device *adev)
{
u8 active, recover, setup;
struct ata_timing t;
struct ata_device *pair = ata_dev_pair(adev);
int clock;
int khz[4] = { 50000, 40000, 33000, 25000 };
u8 rc;
/* Enter configuration mode */
ioread16(ap->ioaddr.error_addr);
ioread16(ap->ioaddr.error_addr);
iowrite8(3, ap->ioaddr.nsect_addr);
/* Read VLB clock strapping */
clock = 1000000000 / khz[ioread8(ap->ioaddr.lbah_addr) & 0x03];
/* Get the timing data in cycles */
ata_timing_compute(adev, adev->pio_mode, &t, clock, 1000);
/* Setup timing is shared */
if (pair) {
struct ata_timing tp;
ata_timing_compute(pair, pair->pio_mode, &tp, clock, 1000);
ata_timing_merge(&t, &tp, &t, ATA_TIMING_SETUP);
}
active = clamp_val(t.active, 2, 17) - 2;
recover = clamp_val(t.recover, 1, 16) - 1;
setup = clamp_val(t.setup, 1, 4) - 1;
/* Select the right timing bank for write timing */
rc = ioread8(ap->ioaddr.lbal_addr);
rc &= 0x7F;
rc |= (adev->devno << 7);
iowrite8(rc, ap->ioaddr.lbal_addr);
/* Write the timings */
iowrite8(active << 4 | recover, ap->ioaddr.error_addr);
/* Select the right bank for read timings, also
load the shared timings for address */
rc = ioread8(ap->ioaddr.device_addr);
rc &= 0xC0;
rc |= adev->devno; /* Index select */
rc |= (setup << 4) | 0x04;
iowrite8(rc, ap->ioaddr.device_addr);
/* Load the read timings */
iowrite8(active << 4 | recover, ap->ioaddr.data_addr);
/* Ensure the timing register mode is right */
rc = ioread8(ap->ioaddr.lbal_addr);
rc &= 0x73;
rc |= 0x84;
iowrite8(rc, ap->ioaddr.lbal_addr);
/* Exit command mode */
iowrite8(0x83, ap->ioaddr.nsect_addr);
}
static struct ata_port_operations opti82c611a_port_ops = {
.inherits = &legacy_base_port_ops,
.set_piomode = opti82c611a_set_piomode,
};
/*
* Opti 82C465MV
*
* This controller supports PIO0 to PIO3. Unlike the 611A the MVB
* version is dual channel but doesn't have a lot of unique registers.
*/
static void opti82c46x_set_piomode(struct ata_port *ap, struct ata_device *adev)
{
u8 active, recover, setup;
struct ata_timing t;
struct ata_device *pair = ata_dev_pair(adev);
int clock;
int khz[4] = { 50000, 40000, 33000, 25000 };
u8 rc;
u8 sysclk;
/* Get the clock */
sysclk = (opti_syscfg(0xAC) & 0xC0) >> 6; /* BIOS set */
/* Enter configuration mode */
ioread16(ap->ioaddr.error_addr);
ioread16(ap->ioaddr.error_addr);
iowrite8(3, ap->ioaddr.nsect_addr);
/* Read VLB clock strapping */
clock = 1000000000 / khz[sysclk];
/* Get the timing data in cycles */
ata_timing_compute(adev, adev->pio_mode, &t, clock, 1000);
/* Setup timing is shared */
if (pair) {
struct ata_timing tp;
ata_timing_compute(pair, pair->pio_mode, &tp, clock, 1000);
ata_timing_merge(&t, &tp, &t, ATA_TIMING_SETUP);
}
active = clamp_val(t.active, 2, 17) - 2;
recover = clamp_val(t.recover, 1, 16) - 1;
setup = clamp_val(t.setup, 1, 4) - 1;
/* Select the right timing bank for write timing */
rc = ioread8(ap->ioaddr.lbal_addr);
rc &= 0x7F;
rc |= (adev->devno << 7);
iowrite8(rc, ap->ioaddr.lbal_addr);
/* Write the timings */
iowrite8(active << 4 | recover, ap->ioaddr.error_addr);
/* Select the right bank for read timings, also
load the shared timings for address */
rc = ioread8(ap->ioaddr.device_addr);
rc &= 0xC0;
rc |= adev->devno; /* Index select */
rc |= (setup << 4) | 0x04;
iowrite8(rc, ap->ioaddr.device_addr);
/* Load the read timings */
iowrite8(active << 4 | recover, ap->ioaddr.data_addr);
/* Ensure the timing register mode is right */
rc = ioread8(ap->ioaddr.lbal_addr);
rc &= 0x73;
rc |= 0x84;
iowrite8(rc, ap->ioaddr.lbal_addr);
/* Exit command mode */
iowrite8(0x83, ap->ioaddr.nsect_addr);
/* We need to know this for quad device on the MVB */
ap->host->private_data = ap;
}
/**
* opti82c46x_qc_issue - command issue
* @qc: command pending
*
* Called when the libata layer is about to issue a command. We wrap
* this interface so that we can load the correct ATA timings. The
* MVB has a single set of timing registers and these are shared
* across channels. As there are two registers we really ought to
* track the last two used values as a sort of register window. For
* now we just reload on a channel switch. On the single channel
* setup this condition never fires so we do nothing extra.
*
* FIXME: dual channel needs ->serialize support
*/
static unsigned int opti82c46x_qc_issue(struct ata_queued_cmd *qc)
{
struct ata_port *ap = qc->ap;
struct ata_device *adev = qc->dev;
/* If timings are set and for the wrong channel (2nd test is
due to a libata shortcoming and will eventually go I hope) */
if (ap->host->private_data != ap->host
&& ap->host->private_data != NULL)
opti82c46x_set_piomode(ap, adev);
return ata_sff_qc_issue(qc);
}
static struct ata_port_operations opti82c46x_port_ops = {
.inherits = &legacy_base_port_ops,
.set_piomode = opti82c46x_set_piomode,
.qc_issue = opti82c46x_qc_issue,
};
/**
* qdi65x0_set_piomode - PIO setup for QDI65x0
* @ap: Port
* @adev: Device
*
* In single channel mode the 6580 has one clock per device and we can
* avoid the requirement to clock switch. We also have to load the timing
* into the right clock according to whether we are master or slave.
*
* In dual channel mode the 6580 has one clock per channel and we have
* to software clockswitch in qc_issue.
*/
static void qdi65x0_set_piomode(struct ata_port *ap, struct ata_device *adev)
{
struct ata_timing t;
struct legacy_data *ld_qdi = ap->host->private_data;
int active, recovery;
u8 timing;
/* Get the timing data in cycles */
ata_timing_compute(adev, adev->pio_mode, &t, 30303, 1000);
if (ld_qdi->fast) {
active = 8 - clamp_val(t.active, 1, 8);
recovery = 18 - clamp_val(t.recover, 3, 18);
} else {
active = 9 - clamp_val(t.active, 2, 9);
recovery = 15 - clamp_val(t.recover, 0, 15);
}
timing = (recovery << 4) | active | 0x08;
ld_qdi->clock[adev->devno] = timing;
if (ld_qdi->type == QDI6580)
outb(timing, ld_qdi->timing + 2 * adev->devno);
else
outb(timing, ld_qdi->timing + 2 * ap->port_no);
/* Clear the FIFO */
if (ld_qdi->type != QDI6500 && adev->class != ATA_DEV_ATA)
outb(0x5F, (ld_qdi->timing & 0xFFF0) + 3);
}
/**
* qdi_qc_issue - command issue
* @qc: command pending
*
* Called when the libata layer is about to issue a command. We wrap
* this interface so that we can load the correct ATA timings.
*/
static unsigned int qdi_qc_issue(struct ata_queued_cmd *qc)
{
struct ata_port *ap = qc->ap;
struct ata_device *adev = qc->dev;
struct legacy_data *ld_qdi = ap->host->private_data;
if (ld_qdi->clock[adev->devno] != ld_qdi->last) {
if (adev->pio_mode) {
ld_qdi->last = ld_qdi->clock[adev->devno];
outb(ld_qdi->clock[adev->devno], ld_qdi->timing +
2 * ap->port_no);
}
}
return ata_sff_qc_issue(qc);
}
static unsigned int vlb32_data_xfer(struct ata_queued_cmd *qc,
unsigned char *buf,
unsigned int buflen, int rw)
{
struct ata_device *adev = qc->dev;
struct ata_port *ap = adev->link->ap;
int slop = buflen & 3;
if (ata_id_has_dword_io(adev->id) && (slop == 0 || slop == 3)
&& (ap->pflags & ATA_PFLAG_PIO32)) {
if (rw == WRITE)
iowrite32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
else
ioread32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
if (unlikely(slop)) {
__le32 pad = 0;
if (rw == WRITE) {
memcpy(&pad, buf + buflen - slop, slop);
iowrite32(le32_to_cpu(pad), ap->ioaddr.data_addr);
} else {
pad = cpu_to_le32(ioread32(ap->ioaddr.data_addr));
memcpy(buf + buflen - slop, &pad, slop);
}
}
return (buflen + 3) & ~3;
} else
return ata_sff_data_xfer(qc, buf, buflen, rw);
}
static int qdi_port(struct platform_device *dev,
struct legacy_probe *lp, struct legacy_data *ld)
{
if (devm_request_region(&dev->dev, lp->private, 4, "qdi") == NULL)
return -EBUSY;
ld->timing = lp->private;
return 0;
}
static struct ata_port_operations qdi6500_port_ops = {
.inherits = &legacy_base_port_ops,
.set_piomode = qdi65x0_set_piomode,
.qc_issue = qdi_qc_issue,
.sff_data_xfer = vlb32_data_xfer,
};
static struct ata_port_operations qdi6580_port_ops = {
.inherits = &legacy_base_port_ops,
.set_piomode = qdi65x0_set_piomode,
.sff_data_xfer = vlb32_data_xfer,
};
static struct ata_port_operations qdi6580dp_port_ops = {
.inherits = &legacy_base_port_ops,
.set_piomode = qdi65x0_set_piomode,
.qc_issue = qdi_qc_issue,
.sff_data_xfer = vlb32_data_xfer,
};
static DEFINE_SPINLOCK(winbond_lock);
static void winbond_writecfg(unsigned long port, u8 reg, u8 val)
{
unsigned long flags;
spin_lock_irqsave(&winbond_lock, flags);
outb(reg, port + 0x01);
outb(val, port + 0x02);
spin_unlock_irqrestore(&winbond_lock, flags);
}
static u8 winbond_readcfg(unsigned long port, u8 reg)
{
u8 val;
unsigned long flags;
spin_lock_irqsave(&winbond_lock, flags);
outb(reg, port + 0x01);
val = inb(port + 0x02);
spin_unlock_irqrestore(&winbond_lock, flags);
return val;
}
static void winbond_set_piomode(struct ata_port *ap, struct ata_device *adev)
{
struct ata_timing t;
struct legacy_data *ld_winbond = ap->host->private_data;
int active, recovery;
u8 reg;
int timing = 0x88 + (ap->port_no * 4) + (adev->devno * 2);
reg = winbond_readcfg(ld_winbond->timing, 0x81);
/* Get the timing data in cycles */
if (reg & 0x40) /* Fast VLB bus, assume 50MHz */
ata_timing_compute(adev, adev->pio_mode, &t, 20000, 1000);
else
ata_timing_compute(adev, adev->pio_mode, &t, 30303, 1000);
active = (clamp_val(t.active, 3, 17) - 1) & 0x0F;
recovery = (clamp_val(t.recover, 1, 15) + 1) & 0x0F;
timing = (active << 4) | recovery;
winbond_writecfg(ld_winbond->timing, timing, reg);
/* Load the setup timing */
reg = 0x35;
if (adev->class != ATA_DEV_ATA)
reg |= 0x08; /* FIFO off */
if (!ata_pio_need_iordy(adev))
reg |= 0x02; /* IORDY off */
reg |= (clamp_val(t.setup, 0, 3) << 6);
winbond_writecfg(ld_winbond->timing, timing + 1, reg);
}
static int winbond_port(struct platform_device *dev,
struct legacy_probe *lp, struct legacy_data *ld)
{
if (devm_request_region(&dev->dev, lp->private, 4, "winbond") == NULL)
return -EBUSY;
ld->timing = lp->private;
return 0;
}
static struct ata_port_operations winbond_port_ops = {
.inherits = &legacy_base_port_ops,
.set_piomode = winbond_set_piomode,
.sff_data_xfer = vlb32_data_xfer,
};
static struct legacy_controller controllers[] = {
{"BIOS", &legacy_port_ops, ATA_PIO4,
ATA_FLAG_NO_IORDY, 0, NULL },
{"Snooping", &simple_port_ops, ATA_PIO4,
0, 0, NULL },
{"PDC20230", &pdc20230_port_ops, ATA_PIO2,
ATA_FLAG_NO_IORDY,
ATA_PFLAG_PIO32 | ATA_PFLAG_PIO32CHANGE, NULL },
{"HT6560A", &ht6560a_port_ops, ATA_PIO2,
ATA_FLAG_NO_IORDY, 0, NULL },
{"HT6560B", &ht6560b_port_ops, ATA_PIO4,
ATA_FLAG_NO_IORDY, 0, NULL },
{"OPTI82C611A", &opti82c611a_port_ops, ATA_PIO3,
0, 0, NULL },
{"OPTI82C46X", &opti82c46x_port_ops, ATA_PIO3,
0, 0, NULL },
{"QDI6500", &qdi6500_port_ops, ATA_PIO2,
ATA_FLAG_NO_IORDY,
ATA_PFLAG_PIO32 | ATA_PFLAG_PIO32CHANGE, qdi_port },
{"QDI6580", &qdi6580_port_ops, ATA_PIO4,
0, ATA_PFLAG_PIO32 | ATA_PFLAG_PIO32CHANGE, qdi_port },
{"QDI6580DP", &qdi6580dp_port_ops, ATA_PIO4,
0, ATA_PFLAG_PIO32 | ATA_PFLAG_PIO32CHANGE, qdi_port },
{"W83759A", &winbond_port_ops, ATA_PIO4,
0, ATA_PFLAG_PIO32 | ATA_PFLAG_PIO32CHANGE,
winbond_port }
};
/**
@@ -897,62 +216,6 @@ static __init int probe_chip_type(struct legacy_probe *probe)
{
int mask = 1 << probe->slot;
if (winbond && (probe->port == 0x1F0 || probe->port == 0x170)) {
u8 reg = winbond_readcfg(winbond, 0x81);
reg |= 0x80; /* jumpered mode off */
winbond_writecfg(winbond, 0x81, reg);
reg = winbond_readcfg(winbond, 0x83);
reg |= 0xF0; /* local control */
winbond_writecfg(winbond, 0x83, reg);
reg = winbond_readcfg(winbond, 0x85);
reg |= 0xF0; /* programmable timing */
winbond_writecfg(winbond, 0x85, reg);
reg = winbond_readcfg(winbond, 0x81);
if (reg & mask)
return W83759A;
}
if (probe->port == 0x1F0) {
unsigned long flags;
local_irq_save(flags);
/* Probes */
outb(inb(0x1F2) | 0x80, 0x1F2);
inb(0x1F5);
inb(0x1F2);
inb(0x3F6);
inb(0x3F6);
inb(0x1F2);
inb(0x1F2);
if ((inb(0x1F2) & 0x80) == 0) {
/* PDC20230c or 20630 ? */
printk(KERN_INFO "PDC20230-C/20630 VLB ATA controller"
" detected.\n");
udelay(100);
inb(0x1F5);
local_irq_restore(flags);
return PDC20230;
} else {
outb(0x55, 0x1F2);
inb(0x1F2);
inb(0x1F2);
if (inb(0x1F2) == 0x00)
printk(KERN_INFO "PDC20230-B VLB ATA "
"controller detected.\n");
local_irq_restore(flags);
return BIOS;
}
}
if (ht6560a & mask)
return HT6560A;
if (ht6560b & mask)
return HT6560B;
if (opti82c611a & mask)
return OPTI611A;
if (opti82c46x & mask)
return OPTI46X;
if (autospeed & mask)
return SNOOP;
return BIOS;
@@ -1085,123 +348,13 @@ static void __init legacy_check_special_cases(struct pci_dev *p, int *primary,
}
}
static __init void probe_opti_vlb(void)
{
/* If an OPTI 82C46X is present find out where the channels are */
static const char *optis[4] = {
"3/463MV", "5MV",
"5MVA", "5MVB"
};
u8 chans = 1;
u8 ctrl = (opti_syscfg(0x30) & 0xC0) >> 6;
opti82c46x = 3; /* Assume master and slave first */
printk(KERN_INFO DRV_NAME ": Opti 82C46%s chipset support.\n",
optis[ctrl]);
if (ctrl == 3)
chans = (opti_syscfg(0x3F) & 0x20) ? 2 : 1;
ctrl = opti_syscfg(0xAC);
/* Check enabled and this port is the 465MV port. On the
MVB we may have two channels */
if (ctrl & 8) {
if (chans == 2) {
legacy_probe_add(0x1F0, 14, OPTI46X, 0);
legacy_probe_add(0x170, 15, OPTI46X, 0);
}
if (ctrl & 4)
legacy_probe_add(0x170, 15, OPTI46X, 0);
else
legacy_probe_add(0x1F0, 14, OPTI46X, 0);
} else
legacy_probe_add(0x1F0, 14, OPTI46X, 0);
}
static __init void qdi65_identify_port(u8 r, u8 res, unsigned long port)
{
static const unsigned long ide_port[2] = { 0x170, 0x1F0 };
/* Check card type */
if ((r & 0xF0) == 0xC0) {
/* QD6500: single channel */
if (r & 8)
/* Disabled ? */
return;
legacy_probe_add(ide_port[r & 0x01], 14 + (r & 0x01),
QDI6500, port);
}
if (((r & 0xF0) == 0xA0) || (r & 0xF0) == 0x50) {
/* QD6580: dual channel */
if (!request_region(port + 2, 2, "pata_qdi")) {
release_region(port, 2);
return;
}
res = inb(port + 3);
/* Single channel mode ? */
if (res & 1)
legacy_probe_add(ide_port[r & 0x01], 14 + (r & 0x01),
QDI6580, port);
else { /* Dual channel mode */
legacy_probe_add(0x1F0, 14, QDI6580DP, port);
/* port + 0x02, r & 0x04 */
legacy_probe_add(0x170, 15, QDI6580DP, port + 2);
}
release_region(port + 2, 2);
}
}
static __init void probe_qdi_vlb(void)
{
unsigned long flags;
static const unsigned long qd_port[2] = { 0x30, 0xB0 };
int i;
/*
* Check each possible QD65xx base address
*/
for (i = 0; i < 2; i++) {
unsigned long port = qd_port[i];
u8 r, res;
if (request_region(port, 2, "pata_qdi")) {
/* Check for a card */
local_irq_save(flags);
/* I have no h/w that needs this delay but it
is present in the historic code */
r = inb(port);
udelay(1);
outb(0x19, port);
udelay(1);
res = inb(port);
udelay(1);
outb(r, port);
udelay(1);
local_irq_restore(flags);
/* Fail */
if (res == 0x19) {
release_region(port, 2);
continue;
}
/* Passes the presence test */
r = inb(port + 1);
udelay(1);
/* Check port agrees with port set */
if ((r & 2) >> 1 == i)
qdi65_identify_port(r, res, port);
release_region(port, 2);
}
}
}
/**
* legacy_init - attach legacy interfaces
*
* Attach legacy IDE interfaces by scanning the usual IRQ/port suspects.
* Right now we do not scan the ide0 and ide1 address but should do so
* for non PCI systems or systems with no PCI IDE legacy mode devices.
-* If you fix that note there are special cases to consider like VLB
-* drivers and CS5510/20.
+* If you fix that note there are special cases to consider like CS5510/20.
*/
static __init int legacy_init(void)
@@ -1235,27 +388,19 @@ static __init int legacy_init(void)
pci_present = 1;
}
if (winbond == 1)
winbond = 0x130; /* Default port, alt is 1B0 */
if (primary == 0 || all)
legacy_probe_add(0x1F0, 14, UNKNOWN, 0);
if (secondary == 0 || all)
legacy_probe_add(0x170, 15, UNKNOWN, 0);
if (probe_all || !pci_present) {
-/* ISA/VLB extra ports */
+/* ISA extra ports */
legacy_probe_add(0x1E8, 11, UNKNOWN, 0);
legacy_probe_add(0x168, 10, UNKNOWN, 0);
legacy_probe_add(0x1E0, 8, UNKNOWN, 0);
legacy_probe_add(0x160, 12, UNKNOWN, 0);
}
if (opti82c46x)
probe_opti_vlb();
if (qdi)
probe_qdi_vlb();
for (i = 0; i < NR_HOST; i++, pl++) {
if (pl->port == 0)
continue;
@@ -1287,8 +432,6 @@ MODULE_AUTHOR("Alan Cox");
MODULE_DESCRIPTION("low-level driver for legacy ATA");
MODULE_LICENSE("GPL");
MODULE_VERSION(DRV_VERSION);
MODULE_ALIAS("pata_qdi");
MODULE_ALIAS("pata_winbond");
module_init(legacy_init);
module_exit(legacy_exit);


@@ -26,10 +26,7 @@ enum {
ATA_MAX_DEVICES = 2, /* per bus/port */
ATA_MAX_PRD = 256, /* we could make these 256/256 */
ATA_SECT_SIZE = 512,
-ATA_MAX_SECTORS_128 = 128,
ATA_MAX_SECTORS = 256,
-ATA_MAX_SECTORS_1024 = 1024,
-ATA_MAX_SECTORS_8191 = 8191,
ATA_MAX_SECTORS_LBA48 = 65535,/* avoid count to be 0000h */
ATA_MAX_SECTORS_TAPE = 65535,
ATA_MAX_TRIM_RNUM = 64, /* 512-byte payload / (6-byte LBA + 2-byte range per entry) */


@@ -46,13 +46,12 @@
/*
* Quirk flags bits.
-* ata_device->quirks is an unsigned int, so __ATA_QUIRK_MAX must not exceed 32.
+* ata_device->quirks is a u64, so __ATA_QUIRK_MAX must not exceed 64.
*/
enum ata_quirks {
__ATA_QUIRK_DIAGNOSTIC, /* Failed boot diag */
__ATA_QUIRK_NODMA, /* DMA problems */
__ATA_QUIRK_NONCQ, /* Don't use NCQ */
-__ATA_QUIRK_MAX_SEC_128, /* Limit max sects to 128 */
__ATA_QUIRK_BROKEN_HPA, /* Broken HPA */
__ATA_QUIRK_DISABLE, /* Disable it */
__ATA_QUIRK_HPA_SIZE, /* Native size off by one */
@@ -74,8 +73,7 @@ enum ata_quirks {
__ATA_QUIRK_ZERO_AFTER_TRIM, /* Guarantees zero after trim */
__ATA_QUIRK_NO_DMA_LOG, /* Do not use DMA for log read */
__ATA_QUIRK_NOTRIM, /* Do not use TRIM */
-__ATA_QUIRK_MAX_SEC_1024, /* Limit max sects to 1024 */
-__ATA_QUIRK_MAX_SEC_8191, /* Limit max sects to 8191 */
+__ATA_QUIRK_MAX_SEC, /* Limit max sectors */
__ATA_QUIRK_MAX_TRIM_128M, /* Limit max trim size to 128M */
__ATA_QUIRK_NO_NCQ_ON_ATI, /* Disable NCQ on ATI chipset */
__ATA_QUIRK_NO_LPM_ON_ATI, /* Disable LPM on ATI chipset */
@@ -91,38 +89,36 @@ enum ata_quirks {
* Some quirks may be drive/controller pair dependent.
*/
enum {
-ATA_QUIRK_DIAGNOSTIC = (1U << __ATA_QUIRK_DIAGNOSTIC),
-ATA_QUIRK_NODMA = (1U << __ATA_QUIRK_NODMA),
-ATA_QUIRK_NONCQ = (1U << __ATA_QUIRK_NONCQ),
-ATA_QUIRK_MAX_SEC_128 = (1U << __ATA_QUIRK_MAX_SEC_128),
-ATA_QUIRK_BROKEN_HPA = (1U << __ATA_QUIRK_BROKEN_HPA),
-ATA_QUIRK_DISABLE = (1U << __ATA_QUIRK_DISABLE),
-ATA_QUIRK_HPA_SIZE = (1U << __ATA_QUIRK_HPA_SIZE),
-ATA_QUIRK_IVB = (1U << __ATA_QUIRK_IVB),
-ATA_QUIRK_STUCK_ERR = (1U << __ATA_QUIRK_STUCK_ERR),
-ATA_QUIRK_BRIDGE_OK = (1U << __ATA_QUIRK_BRIDGE_OK),
-ATA_QUIRK_ATAPI_MOD16_DMA = (1U << __ATA_QUIRK_ATAPI_MOD16_DMA),
-ATA_QUIRK_FIRMWARE_WARN = (1U << __ATA_QUIRK_FIRMWARE_WARN),
-ATA_QUIRK_1_5_GBPS = (1U << __ATA_QUIRK_1_5_GBPS),
-ATA_QUIRK_NOSETXFER = (1U << __ATA_QUIRK_NOSETXFER),
-ATA_QUIRK_BROKEN_FPDMA_AA = (1U << __ATA_QUIRK_BROKEN_FPDMA_AA),
-ATA_QUIRK_DUMP_ID = (1U << __ATA_QUIRK_DUMP_ID),
-ATA_QUIRK_MAX_SEC_LBA48 = (1U << __ATA_QUIRK_MAX_SEC_LBA48),
-ATA_QUIRK_ATAPI_DMADIR = (1U << __ATA_QUIRK_ATAPI_DMADIR),
-ATA_QUIRK_NO_NCQ_TRIM = (1U << __ATA_QUIRK_NO_NCQ_TRIM),
-ATA_QUIRK_NOLPM = (1U << __ATA_QUIRK_NOLPM),
-ATA_QUIRK_WD_BROKEN_LPM = (1U << __ATA_QUIRK_WD_BROKEN_LPM),
-ATA_QUIRK_ZERO_AFTER_TRIM = (1U << __ATA_QUIRK_ZERO_AFTER_TRIM),
-ATA_QUIRK_NO_DMA_LOG = (1U << __ATA_QUIRK_NO_DMA_LOG),
-ATA_QUIRK_NOTRIM = (1U << __ATA_QUIRK_NOTRIM),
-ATA_QUIRK_MAX_SEC_1024 = (1U << __ATA_QUIRK_MAX_SEC_1024),
-ATA_QUIRK_MAX_SEC_8191 = (1U << __ATA_QUIRK_MAX_SEC_8191),
-ATA_QUIRK_MAX_TRIM_128M = (1U << __ATA_QUIRK_MAX_TRIM_128M),
-ATA_QUIRK_NO_NCQ_ON_ATI = (1U << __ATA_QUIRK_NO_NCQ_ON_ATI),
-ATA_QUIRK_NO_LPM_ON_ATI = (1U << __ATA_QUIRK_NO_LPM_ON_ATI),
-ATA_QUIRK_NO_ID_DEV_LOG = (1U << __ATA_QUIRK_NO_ID_DEV_LOG),
-ATA_QUIRK_NO_LOG_DIR = (1U << __ATA_QUIRK_NO_LOG_DIR),
-ATA_QUIRK_NO_FUA = (1U << __ATA_QUIRK_NO_FUA),
+ATA_QUIRK_DIAGNOSTIC = BIT_ULL(__ATA_QUIRK_DIAGNOSTIC),
+ATA_QUIRK_NODMA = BIT_ULL(__ATA_QUIRK_NODMA),
+ATA_QUIRK_NONCQ = BIT_ULL(__ATA_QUIRK_NONCQ),
+ATA_QUIRK_BROKEN_HPA = BIT_ULL(__ATA_QUIRK_BROKEN_HPA),
+ATA_QUIRK_DISABLE = BIT_ULL(__ATA_QUIRK_DISABLE),
+ATA_QUIRK_HPA_SIZE = BIT_ULL(__ATA_QUIRK_HPA_SIZE),
+ATA_QUIRK_IVB = BIT_ULL(__ATA_QUIRK_IVB),
+ATA_QUIRK_STUCK_ERR = BIT_ULL(__ATA_QUIRK_STUCK_ERR),
+ATA_QUIRK_BRIDGE_OK = BIT_ULL(__ATA_QUIRK_BRIDGE_OK),
+ATA_QUIRK_ATAPI_MOD16_DMA = BIT_ULL(__ATA_QUIRK_ATAPI_MOD16_DMA),
+ATA_QUIRK_FIRMWARE_WARN = BIT_ULL(__ATA_QUIRK_FIRMWARE_WARN),
+ATA_QUIRK_1_5_GBPS = BIT_ULL(__ATA_QUIRK_1_5_GBPS),
+ATA_QUIRK_NOSETXFER = BIT_ULL(__ATA_QUIRK_NOSETXFER),
+ATA_QUIRK_BROKEN_FPDMA_AA = BIT_ULL(__ATA_QUIRK_BROKEN_FPDMA_AA),
+ATA_QUIRK_DUMP_ID = BIT_ULL(__ATA_QUIRK_DUMP_ID),
+ATA_QUIRK_MAX_SEC_LBA48 = BIT_ULL(__ATA_QUIRK_MAX_SEC_LBA48),
+ATA_QUIRK_ATAPI_DMADIR = BIT_ULL(__ATA_QUIRK_ATAPI_DMADIR),
+ATA_QUIRK_NO_NCQ_TRIM = BIT_ULL(__ATA_QUIRK_NO_NCQ_TRIM),
+ATA_QUIRK_NOLPM = BIT_ULL(__ATA_QUIRK_NOLPM),
+ATA_QUIRK_WD_BROKEN_LPM = BIT_ULL(__ATA_QUIRK_WD_BROKEN_LPM),
+ATA_QUIRK_ZERO_AFTER_TRIM = BIT_ULL(__ATA_QUIRK_ZERO_AFTER_TRIM),
+ATA_QUIRK_NO_DMA_LOG = BIT_ULL(__ATA_QUIRK_NO_DMA_LOG),
+ATA_QUIRK_NOTRIM = BIT_ULL(__ATA_QUIRK_NOTRIM),
+ATA_QUIRK_MAX_SEC = BIT_ULL(__ATA_QUIRK_MAX_SEC),
+ATA_QUIRK_MAX_TRIM_128M = BIT_ULL(__ATA_QUIRK_MAX_TRIM_128M),
+ATA_QUIRK_NO_NCQ_ON_ATI = BIT_ULL(__ATA_QUIRK_NO_NCQ_ON_ATI),
+ATA_QUIRK_NO_LPM_ON_ATI = BIT_ULL(__ATA_QUIRK_NO_LPM_ON_ATI),
+ATA_QUIRK_NO_ID_DEV_LOG = BIT_ULL(__ATA_QUIRK_NO_ID_DEV_LOG),
+ATA_QUIRK_NO_LOG_DIR = BIT_ULL(__ATA_QUIRK_NO_LOG_DIR),
+ATA_QUIRK_NO_FUA = BIT_ULL(__ATA_QUIRK_NO_FUA),
};
enum {
@@ -723,7 +719,7 @@ struct ata_cdl {
struct ata_device {
struct ata_link *link;
unsigned int devno; /* 0 or 1 */
-unsigned int quirks; /* List of broken features */
+u64 quirks; /* List of broken features */
unsigned long flags; /* ATA_DFLAG_xxx */
struct scsi_device *sdev; /* attached SCSI device */
void *private_data;
@@ -903,6 +899,9 @@ struct ata_port {
u64 qc_active;
int nr_active_links; /* #links with active qcs */
+struct work_struct deferred_qc_work;
+struct ata_queued_cmd *deferred_qc;
struct ata_link link; /* host default link */
struct ata_link *slave_link; /* see ata_slave_link_init() */