path: root/drivers
* dmaengine: remove 'bigref' infrastructure (Dan Williams, 2009-01-06)

    Reference counting is done at the module level so clients need not
    worry that a channel will leave while they are actively using
    dmaengine.

    Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* dmaengine: kill struct dma_client and supporting infrastructure (Dan Williams, 2009-01-06)

    All users have been converted to either the general-purpose allocator,
    dma_find_channel, or dma_request_channel.

    Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* dmaengine: replace dma_async_client_register with dmaengine_get (Dan Williams, 2009-01-06)

    Now that clients no longer need to be notified of channel arrival
    dma_async_client_register can simply increment the dmaengine_ref_count.

    Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* atmel-mci: convert to dma_request_channel and down-level dma_slave (Dan Williams, 2009-01-06)

    dma_request_channel provides an exclusive channel, so we no longer
    need to pass slave data through dmaengine.

    Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
    Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* dmatest: convert to dma_request_channel (Dan Williams, 2009-01-06)

    Replace the client registration infrastructure with a custom loop to
    poll for channels. Once dma_request_channel returns NULL, stop asking
    for channels. A userspace-visible side effect of this change is that
    loading the dmatest module before loading a dma driver will result in
    no channels being found; previously dmatest would get a callback. To
    facilitate testing in the built-in case dmatest_init is marked as a
    late_initcall. Another side effect is that channels under test cannot
    be used for any other purpose.

    Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
    Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
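    In kernel C the polling pattern described above reduces to a few
    lines; a minimal sketch, where dmatest_add_channel() is a hypothetical
    stand-in for whatever bookkeeping the module does per channel:

        dma_cap_mask_t mask;
        struct dma_chan *chan;

        dma_cap_zero(mask);
        dma_cap_set(DMA_MEMCPY, mask);
        for (;;) {
                chan = dma_request_channel(mask, NULL, NULL);
                if (!chan)
                        break;                  /* no more channels */
                dmatest_add_channel(chan);      /* hypothetical helper */
        }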
* dmaengine: introduce dma_request_channel and private channels (Dan Williams, 2009-01-06)

    This interface is primarily for device-to-memory clients which need to
    search for dma channels with platform-specific characteristics. The
    prototype is:

        struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
                                             dma_filter_fn filter_fn,
                                             void *filter_param);

    When the optional 'filter_fn' parameter is set to NULL
    dma_request_channel simply returns the first channel that satisfies
    the capability mask. Otherwise, when the mask parameter is
    insufficient for specifying the necessary channel, the filter_fn
    routine can be used to disposition the available channels in the
    system. The filter_fn routine is called once for each free channel in
    the system. Upon seeing a suitable channel filter_fn returns DMA_ACK
    which flags that channel to be the return value from
    dma_request_channel. A channel allocated via this interface is
    exclusive to the caller, until dma_release_channel() is called.

    To ensure that all channels are not consumed by the general-purpose
    allocator the DMA_PRIVATE capability is provided to exclude a
    dma_device from general-purpose (memory-to-memory) consideration.

    Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
    Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
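    For illustration, a filter-based request might look like the sketch
    below. Only the DMA_ACK return value is confirmed by the text above;
    DMA_DUP as the "skip this channel" value is an assumption about the
    dma_state_client enum of this era:

        static enum dma_state_client my_filter(struct dma_chan *chan,
                                               void *param)
        {
                /* accept only channels belonging to the given device */
                return chan->device->dev == param ? DMA_ACK : DMA_DUP;
        }

        static struct dma_chan *grab_channel(struct device *dev)
        {
                dma_cap_mask_t mask;

                dma_cap_zero(mask);
                dma_cap_set(DMA_SLAVE, mask);
                /* exclusive to us until dma_release_channel() */
                return dma_request_channel(mask, my_filter, dev);
        }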
* dmaengine: provide a common 'issue_pending_all' implementation (Dan Williams, 2009-01-06)

    async_tx and net_dma each have open-coded versions of
    issue_pending_all, so provide a common routine in dmaengine. The
    implementation needs to walk the global device list, so use RCU to
    allow dma_issue_pending_all to run locklessly. Clients protect
    themselves from channel removal events by holding a dmaengine
    reference.

    Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
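    A sketch of what such a lockless walk can look like; the names of the
    global list (dma_device_list) and of the list fields are assumptions
    consistent with this series:

        void dma_issue_pending_all(void)
        {
                struct dma_device *device;
                struct dma_chan *chan;

                rcu_read_lock();
                list_for_each_entry_rcu(device, &dma_device_list,
                                        global_node)
                        list_for_each_entry(chan, &device->channels,
                                            device_node)
                                if (chan->client_count)
                                        chan->device->device_issue_pending(chan);
                rcu_read_unlock();
        }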
* dmaengine: centralize channel allocation, introduce dma_find_channel (Dan Williams, 2009-01-06)

    Allowing multiple clients to each define their own channel allocation
    scheme quickly leads to a pathological situation. For memory-to-memory
    offload all clients can share a central allocator. This simply moves
    the existing async_tx allocator to dmaengine with minimal fixups:

    * async_tx.c:get_chan_ref_by_cap --> dmaengine.c:nth_chan
    * async_tx.c:async_tx_rebalance --> dmaengine.c:dma_channel_rebalance
    * split out common code from async_tx.c:__async_tx_find_channel -->
      dma_find_channel

    Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
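    A hedged sketch of the client pattern this enables, pairing
    dma_find_channel with the dmaengine_get/dmaengine_put references from
    the commits above; the dma_async_memcpy_* helpers are assumed to be
    the era's memory-to-memory entry points:

        static void copy_buf(void *dst, void *src, size_t len)
        {
                struct dma_chan *chan;
                dma_cookie_t cookie, done, used;

                dmaengine_get();                     /* pin dma drivers */
                chan = dma_find_channel(DMA_MEMCPY); /* lockless lookup */
                if (chan) {
                        cookie = dma_async_memcpy_buf_to_buf(chan, dst,
                                                             src, len);
                        dma_async_memcpy_issue_pending(chan);
                        while (dma_async_is_tx_complete(chan, cookie,
                                        &done, &used) == DMA_IN_PROGRESS)
                                cpu_relax();
                } else {
                        memcpy(dst, src, len);       /* cpu fallback */
                }
                dmaengine_put();
        }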
* dmaengine: up-level reference counting to the module level (Dan Williams, 2009-01-06)

    Simply, if a client wants any dmaengine channel then prevent all
    dmaengine modules from being removed. Once the clients are done,
    re-enable module removal.

    Why? Beyond reducing complication:

    1/ Tracking reference counts per-transaction in an efficient manner,
       as is currently done, requires a complicated scheme to avoid
       cache-line bouncing effects.
    2/ Per-transaction ref-counting gives the false impression that a
       dma-driver can be gracefully removed ahead of its user (net, md,
       or dma-slave).
    3/ None of the in-tree dma-drivers talk to hot-pluggable hardware,
       but if such an engine were built one day we still would not need
       to notify clients of remove events. The driver can simply return
       NULL to a ->prep() request, something that is much easier for a
       client to handle.

    Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
    Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* dmaengine: remove dependency on async_tx (Dan Williams, 2009-01-05)

    async_tx.ko is a consumer of dma channels. A circular dependency
    arises if modules in drivers/dma rely on common code in async_tx.ko.
    It prevents either module from being unloaded. Move
    dma_wait_for_async_tx and async_tx_run_dependencies to dmaengine.o,
    where they should have been from the beginning.

    Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* rtc: add alarm/update irq interfaces (Alessandro Zummo, 2009-01-04)

    Add standard interfaces for enabling alarm/update irqs. Drivers are no
    longer required to implement equivalent ioctl code, as rtc-dev will
    provide it. UIE emulation should now be handled correctly and will
    work even for those RTC drivers which cannot be configured to do both
    UIE and AIE.

    Signed-off-by: Alessandro Zummo <a.zummo@towertech.it>
    Cc: David Brownell <david-b@pacbell.net>
    Cc: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
    Cc: Ralf Baechle <ralf@linux-mips.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
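    A sketch of how a driver might wire these up; the foo_* names are
    hypothetical, and the exact rtc_class_ops member names
    (alarm_irq_enable, update_irq_enable) are assumptions based on the
    description above:

        static int foo_alarm_irq_enable(struct device *dev,
                                        unsigned int enabled)
        {
                /* write the alarm-interrupt-enable (AIE) bit */
                return foo_set_bit(dev, FOO_AIE, enabled); /* hypothetical */
        }

        static int foo_update_irq_enable(struct device *dev,
                                         unsigned int enabled)
        {
                /* write the update-interrupt-enable (UIE) bit */
                return foo_set_bit(dev, FOO_UIE, enabled); /* hypothetical */
        }

        static const struct rtc_class_ops foo_rtc_ops = {
                .read_time         = foo_read_time,
                .set_time          = foo_set_time,
                .alarm_irq_enable  = foo_alarm_irq_enable,
                .update_irq_enable = foo_update_irq_enable,
        };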
* viafb: fix crashes due to 4k stack overflow (Bruno Prémont, 2009-01-04)

    The function viafb_cursor() uses two stack variables of CURSOR_SIZE
    bits each; CURSOR_SIZE is defined as (8 * 1024). Using up 1k of stack
    twice is too much for 4k stacks (though it works with 8k stacks).
    Make those two variables kzalloc'ed to preserve stack space.

    Also merge the whole lot of local structs in viafb_ioctl into a union
    so the stack usage gets minimized here as well. (The structs are only
    accessed in their individual IOCTL cases.) This second part is only
    compile-tested as I know of no userspace app using the IOCTLs.

    Signed-off-by: Bruno Prémont <bonbons@linux-vserver.org>
    Cc: <JosephChan@via.com.tw>
    Cc: Krzysztof Helt <krzysztof.h1@poczta.fm>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
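    The stack-to-heap pattern of the first fix, as a simplified sketch
    (function and variable names are illustrative):

        static void viafb_cursor_sketch(void)
        {
                u8 *data, *mask;

                /* formerly two on-stack arrays of CURSOR_SIZE bits
                 * (1k each), too much with 4k stacks */
                data = kzalloc(CURSOR_SIZE / 8, GFP_KERNEL);
                mask = kzalloc(CURSOR_SIZE / 8, GFP_KERNEL);
                if (!data || !mask)
                        goto out;

                /* ... build the cursor image, program the hardware ... */
        out:
                kfree(data);    /* kfree(NULL) is a no-op */
                kfree(mask);
        }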
* Merge branch 'cpus4096-for-linus-3' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip (Linus Torvalds, 2009-01-03)

    * 'cpus4096-for-linus-3' of
      git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (77 commits)
        x86: setup_per_cpu_areas() cleanup
        cpumask: fix compile error when CONFIG_NR_CPUS is not defined
        cpumask: use alloc_cpumask_var_node where appropriate
        cpumask: convert shared_cpu_map in acpi_processor* structs to cpumask_var_t
        x86: use cpumask_var_t in acpi/boot.c
        x86: cleanup some remaining usages of NR_CPUS where s/b nr_cpu_ids
        sched: put back some stack hog changes that were undone in kernel/sched.c
        x86: enable cpus display of kernel_max and offlined cpus
        ia64: cpumask fix for is_affinity_mask_valid()
        cpumask: convert RCU implementations, fix
        xtensa: define __fls
        mn10300: define __fls
        m32r: define __fls
        h8300: define __fls
        frv: define __fls
        cris: define __fls
        cpumask: CONFIG_DISABLE_OBSOLETE_CPUMASK_FUNCTIONS
        cpumask: zero extra bits in alloc_cpumask_var_node
        cpumask: replace for_each_cpu_mask_nr with for_each_cpu in kernel/time/
        cpumask: convert mm/
        ...
| * cpumask: fix compile error when CONFIG_NR_CPUS is not defined (Mike Travis, 2009-01-03)

    CONFIG_NR_CPUS will be defined for all arches whether SMP or not, but
    it may not have made it into all arches yet.

    Signed-off-by: Mike Travis <travis@sgi.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * cpumask: convert shared_cpu_map in acpi_processor* structs to cpumask_var_t (Rusty Russell, 2009-01-03)

    Impact: Reduce memory usage, use new API.

    This is part of an effort to reduce structure sizes for machines
    configured with large NR_CPUS. cpumask_t gets replaced by
    cpumask_var_t, which is either struct cpumask[1] (small NR_CPUS) or
    struct cpumask * (large NR_CPUS).

    (Changes to powernow-k* by <travis>.)

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Mike Travis <travis@sgi.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
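    The allocation side of the new API in brief, a sketch using calls
    named by this series (alloc_cpumask_var is the non-NUMA sibling of
    the alloc_cpumask_var_node mentioned above):

        static int foo_init(void)
        {
                cpumask_var_t mask;     /* struct cpumask[1] or pointer,
                                         * per CONFIG_CPUMASK_OFFSTACK */

                if (!alloc_cpumask_var(&mask, GFP_KERNEL))
                        return -ENOMEM;

                cpumask_copy(mask, cpu_online_mask);
                /* ... use mask ... */

                free_cpumask_var(mask);
                return 0;
        }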
| * Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux-2.6-cpumask into merge-rr-cpumask (Mike Travis, 2009-01-03)

    Conflicts:
        arch/x86/kernel/io_apic.c
        kernel/rcuclassic.c
        kernel/sched.c
        kernel/time/tick-sched.c

    Signed-off-by: Mike Travis <travis@sgi.com>
    [ mingo@elte.hu: backmerged typo fix for io_apic.c ]
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

| | * percpu: fix percpu accessors to potentially !cpu_possible() cpus: pnpbios (Rusty Russell, 2009-01-01)

    Impact: CPU iterator bugfixes

    Percpu areas are only allocated for possible cpus. In general, you
    shouldn't access random cpu's percpu areas.

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Signed-off-by: Mike Travis <travis@sgi.com>
    Acked-by: Ingo Molnar <mingo@elte.hu>
    Cc: Adam Belay <ambx1@neo.rr.com>

| | * Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6 (Rusty Russell, 2008-12-31)

    Conflicts:
        arch/x86/kernel/io_apic.c
| | * | cpumask: use new cpumask API in drivers/infiniband/hw/ipath (Rusty Russell, 2008-12-30)

    Impact: cleanup

    We're moving from handing around cpumask_t's to handing around struct
    cpumask *'s. cpus_*, cpumask_t and cpu_*_map are deprecated: convert
    to cpumask_*, cpu_*_mask.

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Cc: Ralph Campbell <infinipath@qlogic.com>
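    The deprecated/new pairing maps roughly as in the sketch below, with
    do_work() standing in for whatever is done per cpu:

        extern void do_work(int cpu);   /* illustrative */

        /* old, deprecated style: cpumask_t by value, cpus_* operations */
        static void old_style(cpumask_t mask)
        {
                int cpu;

                for_each_cpu_mask(cpu, mask)
                        do_work(cpu);
        }

        /* new style: struct cpumask pointers, cpumask_* operations */
        static void new_style(const struct cpumask *mask)
        {
                int cpu;

                for_each_cpu(cpu, mask)
                        do_work(cpu);
        }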
| | * | cpumask: use new cpumask API in drivers/infiniband/hw/ehca (Rusty Russell, 2008-12-30)

    Impact: cleanup

    We're moving from handing around cpumask_t's to handing around struct
    cpumask *'s. cpus_*, cpumask_t and cpu_*_map are deprecated: convert
    to cpumask_*, cpu_*_mask.

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Acked-by: Hoang-Nam Nguyen <hnguyen@de.ibm.com>
    Tested-by: Hoang-Nam Nguyen <hnguyen@de.ibm.com>
    Cc: Christoph Raisch <raisch@de.ibm.com>
| | * | cpumask: use for_each_online_cpu() in drivers/infiniband/hw/ehca/ehca_irq.c (Rusty Russell, 2008-12-30)

    Impact: cleanup

    In future, accessing cpu numbers beyond nr_cpu_ids (the runtime limit)
    will be undefined. We can avoid future problems by using
    for_each_online_cpu() here.

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
    Acked-by: Hoang-Nam Nguyen <hnguyen@de.ibm.com>
    Tested-by: Hoang-Nam Nguyen <hnguyen@de.ibm.com>
    Cc: Christoph Raisch <raisch@de.ibm.com>
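    A before/after sketch of the conversion (poke_cpu() is illustrative;
    the two loops show alternatives, not sequential code):

        extern void poke_cpu(int cpu);  /* illustrative */

        static void poke_all(void)
        {
                int cpu;

                /* old pattern: walks cpu numbers up to NR_CPUS, which
                 * may exceed the runtime limit nr_cpu_ids */
                for (cpu = 0; cpu < NR_CPUS; cpu++)
                        if (cpu_online(cpu))
                                poke_cpu(cpu);

                /* new pattern: visits exactly the online cpus */
                for_each_online_cpu(cpu)
                        poke_cpu(cpu);
        }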
| | * | Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6 (Rusty Russell, 2008-12-30)

| | * | | cpumask: add sysfs displays for configured and disabled cpu maps (Mike Travis, 2008-12-19)

    Impact: add new sysfs files.

    Add sysfs files "kernel_max" and "offline" to display the max CPU
    index allowed (NR_CPUS-1), and the map of cpus that are offline.

    Cpus can be offlined via HOTPLUG, disabled by the BIOS ACPI tables, or
    if they exceed the number of cpus allowed by the NR_CPUS config
    option, or the "maxcpus=NUM" kernel start parameter.

    The "possible_cpus=NUM" parameter can also extend the number of
    possible cpus allowed, in which case the cpus not present at startup
    will be in the offline state. (These cpus can be HOTPLUGGED ON after
    system startup [pending a follow-on patch to provide the capability
    via the /sys/devices/sys/cpu/cpuN/online mechanism to bring them
    online.])

    By design, the "offlined cpus > possible cpus" display will always use
    the following formats:

      * all possible cpus online:   "x$" or "x-y$"
      * some possible cpus offline: ".*,x$" or ".*,x-y$"

    where:
      x == number of possible cpus (nr_cpu_ids); and
      y == number of cpus >= NR_CPUS or maxcpus (if y > x).

    One use of this feature is for distros to select (or configure) the
    appropriate kernel to install for the resident system.

    Notes:
      * cpus offlined <= possible cpus will be printed for all
        architectures.
      * cpus offlined > possible cpus will only be printed for arches
        that set 'total_cpus' [X86 only in this patch].

    Based on tip/cpus4096 + .../rusty/linux-2.6-for-ingo.git/master +
    x86-only-patches sent 12/15.

    Signed-off-by: Mike Travis <travis@sgi.com>
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
* | | | | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/linux-2.6-iommu (Linus Torvalds, 2009-01-03)

    * 'for-linus' of
      git://git.kernel.org/pub/scm/linux/kernel/git/joro/linux-2.6-iommu: (89 commits)
        AMD IOMMU: remove now unnecessary #ifdefs
        AMD IOMMU: prealloc_protection_domains should be static
        kvm/iommu: fix compile warning
        AMD IOMMU: add statistics about total number of map requests
        AMD IOMMU: add statistics about allocated io memory
        AMD IOMMU: add stats counter for domain tlb flushes
        AMD IOMMU: add stats counter for single iommu domain tlb flushes
        AMD IOMMU: add stats counter for cross-page request
        AMD IOMMU: add stats counter for free_coherent requests
        AMD IOMMU: add stats counter for alloc_coherent requests
        AMD IOMMU: add stats counter for unmap_sg requests
        AMD IOMMU: add stats counter for map_sg requests
        AMD IOMMU: add stats counter for unmap_single requests
        AMD IOMMU: add stats counter for map_single requests
        AMD IOMMU: add stats counter for completion wait events
        AMD IOMMU: add init code for statistic collection
        AMD IOMMU: add necessary header defines for stats counting
        AMD IOMMU: add Kconfig entry for statistic collection code
        AMD IOMMU: use dev_name in iommu_enable function
        AMD IOMMU: use calc_devid in prealloc_protection_domains
        ...
| * | | | | intel-iommu: fix bit shift at DOMAIN_FLAG_P2P_MULTIPLE_DEVICES (Mike Day, 2009-01-03)

    Signed-off-by: Mike Day <ncmike@ncultra.org>
    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

| * | | | | VT-d: remove now unused intel_iommu_found function (Joerg Roedel, 2009-01-03)

    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

| * | | | | VT-d: register functions for the IOMMU API (Joerg Roedel, 2009-01-03)

    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

| * | | | | VT-d: adapt domain iova_to_phys function for IOMMU API (Joerg Roedel, 2009-01-03)

    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

| * | | | | VT-d: adapt domain map and unmap functions for IOMMU API (Joerg Roedel, 2009-01-03)

    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

| * | | | | VT-d: adapt device attach and detach functions for IOMMU API (Joerg Roedel, 2009-01-03)

    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

| * | | | | VT-d: adapt domain init and destroy functions for IOMMU API (Joerg Roedel, 2009-01-03)

    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

| * | | | | select IOMMU_API when DMAR and/or AMD_IOMMU is selected (Joerg Roedel, 2009-01-03)

    These two IOMMUs can implement the current version of this API. So
    select the API if one or both of these IOMMU drivers is selected.

    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
| * | | | | add frontend implementation for the IOMMU API (Joerg Roedel, 2009-01-03)

    This API can be used by KVM for accessing different types of IOMMUs to
    do device passthrough to guests. Besides that, this API can also be
    used by device drivers to map non-linear host memory into dma-linear
    addresses, avoiding the need for scatter-gather DMA. UIO may be
    another user for this API.

    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
    Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
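    A sketch of the consumer-side flow; the entry-point names
    (iommu_found, iommu_domain_alloc, iommu_attach_device,
    iommu_map_range) are this era's API as best understood here, and
    later kernels renamed some of them:

        static int assign_region(struct device *dev, unsigned long iova,
                                 phys_addr_t paddr, size_t size)
        {
                struct iommu_domain *domain;
                int ret;

                if (!iommu_found())     /* any IOMMU driver registered? */
                        return -ENODEV;

                domain = iommu_domain_alloc();
                if (!domain)
                        return -ENOMEM;

                ret = iommu_attach_device(domain, dev);
                if (ret)
                        goto out_free;

                ret = iommu_map_range(domain, iova, paddr, size,
                                      IOMMU_READ | IOMMU_WRITE);
                if (ret)
                        goto out_detach;
                return 0;

        out_detach:
                iommu_detach_device(domain, dev);
        out_free:
                iommu_domain_free(domain);
                return ret;
        }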
| * | | | | Check agaw is sufficient for mapped memory (Weidong Han, 2009-01-03)

    When a domain is related to multiple iommus, we need to check whether
    the minimum agaw is sufficient for the mapped memory.

    Signed-off-by: Weidong Han <weidong.han@intel.com>
    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
| * | | | | Change intel iommu APIs of virtual machine domain (Weidong Han, 2009-01-03)

    These APIs are used by KVM to make use of VT-d.

    Signed-off-by: Weidong Han <weidong.han@intel.com>
    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

| * | | | | Change domain_context_mapping_one for virtual machine domain (Weidong Han, 2009-01-03)

    vm_domid won't be set in the context entry; instead, find an available
    domain id for a device from its iommu. For a virtual machine domain, a
    default agaw will be set, and the top levels of page tables are
    skipped for any iommu which has a smaller agaw than the default.

    Signed-off-by: Weidong Han <weidong.han@intel.com>
    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

| * | | | | Allocation and free functions of virtual machine domain (Weidong Han, 2009-01-03)

    A virtual machine domain is different from a native DMA-API domain, so
    implement separate allocation and free functions for virtual machine
    domains.

    Signed-off-by: Weidong Han <weidong.han@intel.com>
    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
| * | | | | Add domain_flush_cache (Weidong Han, 2009-01-03)

    Because a virtual machine domain may have multiple devices from
    different iommus, it cannot use __iommu_flush_cache. In some common
    low-level functions, use domain_flush_cache instead of
    __iommu_flush_cache. On the other hand, in functions where the iommu
    is explicitly specified or the domain cannot be obtained,
    __iommu_flush_cache is still used.

    Signed-off-by: Weidong Han <weidong.han@intel.com>
    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
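    The helper itself can be very short; a sketch consistent with the
    description above, where clflush_cache_range is the existing x86
    helper for flushing a range on non-coherent hardware:

        static void domain_flush_cache(struct dmar_domain *domain,
                                       void *addr, int size)
        {
                /* only non-coherent domains need an explicit flush */
                if (!domain->iommu_coherency)
                        clflush_cache_range(addr, size);
        }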
| * | | | | Add/remove domain device info for virtual machine domain (Weidong Han, 2009-01-03)

    Add an iommu reference count in the domain, and add a lock to protect
    iommu settings including iommu_bmp, iommu_count and iommu_coherency.
    A virtual machine domain may have multiple devices from different
    iommus, so it needs to do more work when adding/removing domain
    device info. Thus implement these functions separately for virtual
    machine domains.

    Signed-off-by: Weidong Han <weidong.han@intel.com>
    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
| * | | | | Add domain flag DOMAIN_FLAG_VIRTUAL_MACHINE (Weidong Han, 2009-01-03)

    Add this flag for VT-d used in virtual machines, like KVM.

    Signed-off-by: Weidong Han <weidong.han@intel.com>
    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
| * | | | | iommu coherency (Weidong Han, 2009-01-03)

    In a dmar_domain, more than one iommu may be included in iommu_bmp.
    Because the "Coherency" capability may differ across iommus, set this
    variable to indicate whether iommu access is coherent or not. Only
    when all related iommus in a dmar_domain are coherent is iommu access
    for this domain coherent.

    Signed-off-by: Weidong Han <weidong.han@intel.com>
    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
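    The resulting rule is a logical AND across the domain's iommus; a
    sketch, assuming the g_iommus[]/g_num_of_iommus globals introduced by
    the "Add global iommu list" commit below and the VT-d ecap_coherent()
    capability macro:

        static void domain_update_iommu_coherency(struct dmar_domain *d)
        {
                int i;

                d->iommu_coherency = 1;         /* assume coherent */

                i = find_first_bit(&d->iommu_bmp, g_num_of_iommus);
                while (i < g_num_of_iommus) {
                        if (!ecap_coherent(g_iommus[i]->ecap)) {
                                d->iommu_coherency = 0;
                                break;
                        }
                        i = find_next_bit(&d->iommu_bmp,
                                          g_num_of_iommus, i + 1);
                }
        }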
| * | | | | calculate agaw for each iommu (Weidong Han, 2009-01-03)

    The "SAGAW" capability may be different across iommus. Use a default
    agaw, but if the default agaw is not supported in some iommus, choose
    a smaller agaw that is.

    Signed-off-by: Weidong Han <weidong.han@intel.com>
    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
| * | | | | iommu bitmap instead of iommu pointer in dmar_domain (Weidong Han, 2009-01-03)

    In order to support assigning multiple devices from different iommus
    to a domain, an iommu bitmap is used to keep track of all iommus the
    domain is related to.

    Signed-off-by: Weidong Han <weidong.han@intel.com>
    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
| * | | | | Get iommu from g_iommus for deferred flush (Weidong Han, 2009-01-03)

    deferred_flush[] is indexed by the iommu seq_id, so its iommu is fixed
    and can be obtained from g_iommus.

    Signed-off-by: Weidong Han <weidong.han@intel.com>
    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

| * | | | | Add global iommu list (Weidong Han, 2009-01-03)

    Signed-off-by: Weidong Han <weidong.han@intel.com>
    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

| * | | | | change P2P domain flags (Weidong Han, 2009-01-03)

    Signed-off-by: Weidong Han <weidong.han@intel.com>
    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
| * | | | | Initialize domain flags to 0 (Weidong Han, 2009-01-03)

    The flags field is a random number after the domain is allocated by
    kmem_cache_alloc.

    Signed-off-by: Weidong Han <weidong.han@intel.com>
    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
| * | | | | VT-d: fix segment number being ignored when searching DRHD (Yu Zhao, 2009-01-03)

    On platforms with multiple PCI segments, any of the segments can have
    a DRHD with the INCLUDE_PCI_ALL flag, so we need to check the DRHD's
    segment number against the PCI device's when searching its DRHD.

    Signed-off-by: Yu Zhao <yu.zhao@intel.com>
    Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
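    In C terms the fix adds one comparison to the DRHD search loop; a
    simplified sketch (the real lookup also matches the device against
    drhd->devices[], elided here):

        static struct intel_iommu *find_iommu(struct pci_dev *pdev)
        {
                struct dmar_drhd_unit *drhd;

                for_each_drhd_unit(drhd) {
                        if (drhd->ignored)
                                continue;
                        /* the DRHD must be on the device's PCI segment */
                        if (drhd->segment != pci_domain_nr(pdev->bus))
                                continue;
                        if (drhd->include_all)
                                return drhd->iommu;
                        /* ... otherwise match pdev in drhd->devices[] ... */
                }
                return NULL;
        }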
| * | | | | intel-iommu: trivially inline DMA PTE macros (Mark McLoughlin, 2009-01-03)

    Signed-off-by: Mark McLoughlin <markmc@redhat.com>
    Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>

| * | | | | intel-iommu: trivially inline context entry macros (Mark McLoughlin, 2009-01-03)

    Some macros were unused, so I just dropped them:

        context_fault_disable
        context_translation_type
        context_address_root
        context_address_width
        context_domain_id

    Signed-off-by: Mark McLoughlin <markmc@redhat.com>
    Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>