LAN9500 supports TX checksum offload, which slightly decreases CPU
utilisation. The benefit isn't very large because we still require
the skb to be linearized, but it does save a few cycles.
This patch adds support for it, and enables it by default.
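As a rough sketch (illustrative, not the exact patch), the driver
advertises the offload by setting the hardware-checksum feature flag
on the usbnet's net_device at bind time, after which the stack hands
us CHECKSUM_PARTIAL skbs and lets the NIC insert the checksum:

/* sketch: flag choice and placement are assumptions */
net->features |= NETIF_F_HW_CSUM;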
Signed-off-by: Steve Glendinning <steve.glendinning@smsc.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Randy Dunlap found that SFC_MTD was selected when sfc was built-in and
the MTD core was a module. Don't allow that combination.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Acked-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch kills the direct references to netdev->priv as well.
Because the private data must be allocated in a DMA area,
alloc_etherdev() can't satisfy this need.
Use netdev->ml_priv to point to lance_private.
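The conversion pattern, roughly (a sketch, not the full diff):

/* init path: attach the separately allocated, DMA-capable
 * private area via ml_priv instead of netdev->priv */
dev->ml_priv = lp;

[...]

/* use sites: */
struct lance_private *lp = dev->ml_priv;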
Signed-off-by: Wang Chen <wangchen@cn.fujitsu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently, if multiple unicast addresses are programmed into a
mv643xx_eth interface, the core networking will resort to enabling
promiscuous mode on the interface, as mv643xx_eth does not implement
->set_rx_mode().
This patch switches mv643xx_eth over from ->set_multicast_list()
to ->set_rx_mode(), and implements support for secondary unicast
addresses. The hardware can handle multiple unicast addresses as
long as their first 11 nibbles are the same (i.e. are of the form
xx:xx:xx:xx:xx:xy where the x part is the same for all addresses), so
if that is the case, we use that mode. If it's not the case, we enable
unicast promiscuous mode in the hardware, which is slightly better than
enabling promiscuous mode for multicasts as well, which is what would
happen before.
While we are at it, change the programming sequence so that we
don't clear all filter bits first, so we don't lose all incoming
packets while the filter is being reprogrammed.
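A sketch of the filterability check (the helper name is made up;
dev->uc_list is the stack's secondary unicast address list):

static int uc_addrs_filterable(struct net_device *dev)
{
        struct dev_addr_list *uc;
        const u8 *base = dev->dev_addr;

        for (uc = dev->uc_list; uc != NULL; uc = uc->next) {
                if (memcmp(uc->da_addr, base, 5))
                        return 0;       /* first 10 nibbles differ */
                if ((uc->da_addr[5] ^ base[5]) & 0xf0)
                        return 0;       /* 11th nibble differs */
        }
        return 1;       /* all addresses fit the hardware filter */
}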
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Since txq_alloc_desc_index() is a very simple function, and since
descriptor ring index updates for transmit reclaim, receive
processing and receive refill are already done inline as well,
inline txq_alloc_desc_index() into its two call sites.
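The open-coded equivalent at each call site is essentially:

/* advance the descriptor ring index, wrapping at ring size */
tx_index = txq->tx_curr_desc++;
if (txq->tx_curr_desc == txq->tx_ring_size)
        txq->tx_curr_desc = 0;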
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The mv643xx_eth driver uses the rdl()/wrl() macros to read and
write hardware registers. Per-port registers are accessed in the
following way:
#define PORT_STATUS(p) (0x0444 + ((p) << 10))
[...]
static inline u32 rdl(struct mv643xx_eth_private *mp, int offset)
{
        return readl(mp->shared->base + offset);
}
[...]
port_status = rdl(mp, PORT_STATUS(mp->port_num));
By giving the per-port 'struct mv643xx_eth_private' its own
'void __iomem *base' pointer that points to the per-port register
area, we can get rid of both the double indirection and the << 10
that is done for every per-port register access -- this patch does
that.
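After the change, a per-port access looks roughly like this (the
accessor name and the 0x0044 offset are illustrative):

#define PORT_STATUS                     0x0044

static inline u32 rdlp(struct mv643xx_eth_private *mp, int offset)
{
        return readl(mp->base + offset);
}

[...]

port_status = rdlp(mp, PORT_STATUS);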
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix up a couple of coding style issues caught by checkpatch.
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Remove the use of a hardcoded sram_size; determine the string_spec
location from the MCP header instead.
Signed-off-by: Brice Goglin <brice@myri.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Properly attribute transmit and receive drops by incrementing the
per-slice counter.
Signed-off-by: Brice Goglin <brice@myri.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Mike Frysinger <vapier.adi@gmail.com>
Signed-off-by: Bryan Wu <cooloney@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
All of the memory pointed to by netdev->priv is allocated in advance
rather than by alloc_netdev(), so use netdev->ml_priv to point to
that memory.
Signed-off-by: Wang Chen <wangchen@cn.fujitsu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
1. alloc_etherdev() does not allocate any memory for netdev->priv.
2. The private data needs to occupy a whole page.
For these reasons, using netdev->ml_priv to point to the page is the
best way to convert the direct references to netdev->priv.
Signed-off-by: Wang Chen <wangchen@cn.fujitsu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When suspending the device, the ring structure is freed, which causes
it to lose track of the count. To resolve this, we need to move the
ring count outside of the ring structure and store it in the adapter
struct.
In addition to resolving the suspend/resume issue, this patch also
addresses issues seen when memory allocation errors cause uneven ring
sizes on multiple queues.
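Conceptually (field names are assumptions), the counts move up one
level so they survive freeing the rings:

struct igb_adapter {
        [...]
        u16 tx_ring_count;      /* kept across suspend/resume */
        u16 rx_ring_count;
};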
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This update replaces the recently added xchg calls with a pair of
assignments; the xchg calls are unnecessary and were found to cause
issues on some architectures.
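The replacement is of this form (sketch):

/* before: temp = xchg(&adapter->rx_ring, rx_new);
 * after: a plain save-and-assign pair; no atomicity is needed
 * here since the interface is down while rings are swapped */
temp = adapter->rx_ring;
adapter->rx_ring = rx_new;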
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds the 82576 device to the description for igb in Kconfig.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert this driver to network device ops. Compile tested only.
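These conversions all follow the same pattern; roughly (with
made-up names):

static const struct net_device_ops foo_netdev_ops = {
        .ndo_open               = foo_open,
        .ndo_stop               = foo_close,
        .ndo_start_xmit         = foo_start_xmit,
        .ndo_set_multicast_list = foo_set_multicast,
};

[...]

dev->netdev_ops = &foo_netdev_ops;      /* replaces dev->open,
                                           dev->hard_start_xmit, ... */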
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert this driver to network device ops. Compile tested only (give me hw!)
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert this driver to network device ops. Compile tested only.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert this driver to network device ops.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert this driver to network device ops. Compile tested only.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert this driver to network device ops. Compile tested only.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert this driver to network device ops. Compile tested only.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert to new network device ops interface. Compile tested only.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert to new network device ops interface. Compile tested only.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert this driver to network device ops. Compile tested only.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert to new network device ops interface.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert this driver to network device ops. Compile tested only.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert this driver to network device ops. Compile tested only.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert this driver to network device ops. Compile tested only.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert this driver to network device ops. Compile tested only.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert the TUN/TAP tunnel driver to net_device_ops.
Split the ops in two, and retain compatibility.
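A sketch of the split (table names are assumptions): TUN mode is
point-to-point and needs no L2 address handling, while TAP mode does:

static const struct net_device_ops tun_netdev_ops = {
        .ndo_open       = tun_net_open,
        .ndo_stop       = tun_net_close,
        .ndo_start_xmit = tun_net_xmit,
};

static const struct net_device_ops tap_netdev_ops = {
        .ndo_open               = tun_net_open,
        .ndo_stop               = tun_net_close,
        .ndo_start_xmit         = tun_net_xmit,
        .ndo_set_multicast_list = tun_net_mclist,
        .ndo_set_mac_address    = eth_mac_addr,
        .ndo_validate_addr      = eth_validate_addr,
};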
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert to new network device ops interface.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert to new network device ops interface.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert to new network device ops interface. Slight additional complexity
here because the second port does not allow netpoll and therefore has
has a different virtual function table.
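Sketch of the two tables (illustrative names; they differ only in
the netpoll hook):

static const struct net_device_ops port0_netdev_ops = {
        .ndo_open               = foo_open,
        .ndo_stop               = foo_close,
        .ndo_start_xmit         = foo_xmit,
#ifdef CONFIG_NET_POLL_CONTROLLER
        .ndo_poll_controller    = foo_netpoll,
#endif
};

static const struct net_device_ops port1_netdev_ops = {
        .ndo_open       = foo_open,
        .ndo_stop       = foo_close,
        .ndo_start_xmit = foo_xmit,
        /* no .ndo_poll_controller on the second port */
};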
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert to net_device_ops table.
Note: for some operations, error checking moves into the generic
networking layer (rather than looking at pointers in bonding).
A couple of gratuitous style cleanups to get rid of extra {}.
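For example (a sketch of the idea, not the exact hunk), the core
can reject an unsupported operation centrally:

/* in the generic layer; bonding no longer probes this pointer */
if (!slave_dev->netdev_ops->ndo_set_mac_address)
        return -EOPNOTSUPP;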
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert to net_device_ops function table.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert to net_device_ops function table.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert to new network device ops interface.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
First device to convert over is the loopback device.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In order for the network device ops get_stats call to be immutable, the handling
of the default internal network device stats block has to be changed. Add a new
helper function which replaces the old use of internal_get_stats.
Note: the return type changes to make it clear that the caller
should not go changing the returned statistics.
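Sketch of the replacement helper (the name is an assumption):

/* Return the device's built-in stats block; the const return
 * type makes it clear callers must not modify the contents. */
static const struct net_device_stats *
internal_stats(struct net_device *dev)
{
        return &dev->stats;
}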
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6: (27 commits)
rtnetlink: propagate error from dev_change_flags in do_setlink()
isdn: remove extra byteswap in isdn_net_ciscohdlck_slarp_send_reply
Phonet: refuse to send bigger than MTU packets
e1000e: fix IPMI traffic
e1000e: fix warn_on reload after phy_id error
phy: fix phy address bug
e100: fix dma error in direction for mapping
igb: use dev_printk instead of printk
qla3xxx: Cleanup: Fix link print statements.
igb: Use device_set_wakeup_enable
e1000: Use device_set_wakeup_enable
e1000e: Use device_set_wakeup_enable
via-velocity: enable perfect filtering for multicast packets
phy: Add support for Marvell 88E1118 PHY
mlx4_en: Pause parameters per port
phylib: fix premature freeing of struct mii_bus
atl1: Do not enumerate options unsupported by chip
atl1e: fix broken multicast by removing unnecessary crc inversion
gianfar: Fix DMA unmap invocations
net/ucc_geth: Fix oops in uec_get_ethtool_stats()
...
Several netdevs share one adapter here.
Make netdev->ml_priv of each netdev point to the first netdev's priv.
Signed-off-by: Wang Chen <wangchen@cn.fujitsu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
If segmentation offload is enabled by the host, we currently allocate
maximum sized packet buffers and pass them to the host. This uses up
20 ring entries per buffer, allowing us to supply only 12 packet buffers to the
host with a 256 entry ring. This is a huge overhead when receiving
small packets, and is most keenly felt when receiving MTU sized
packets from off-host.
The VIRTIO_NET_F_MRG_RXBUF feature flag is set by hosts which support
using receive buffers which are smaller than the maximum packet size.
In order to transfer large packets to the guest, the host merges
together multiple receive buffers to form a larger logical buffer.
The number of merged buffers is returned to the guest via a field in
the virtio_net_hdr.
Make use of this support by supplying single page receive buffers to
the host. On receive, we extract the virtio_net_hdr, copy 128 bytes of
the payload to the skb's linear data buffer and adjust the fragment
offset to point to the remaining data. This ensures proper alignment
and allows us to not use any paged data for small packets. If the
payload occupies multiple pages, we simply append those pages as
fragments and free the associated skbs.
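A sketch of the receive path described above (names and details are
illustrative, not the exact patch):

struct virtio_net_hdr_mrg_rxbuf *mhdr = page_address(page);
unsigned int num_buf = mhdr->num_buffers;  /* merged by the host */
char *p = (char *)(mhdr + 1);
unsigned int copy = min_t(unsigned int, len, 128);

/* keep small packets fully linear and properly aligned */
skb_copy_to_linear_data(skb, p, copy);
skb_put(skb, copy);

if (len > copy) {
        /* hang the rest of this page off the skb as frag 0 */
        skb_shinfo(skb)->frags[0].page = page;
        skb_shinfo(skb)->frags[0].page_offset =
                (p - (char *)page_address(page)) + copy;
        skb_shinfo(skb)->frags[0].size = len - copy;
        skb_shinfo(skb)->nr_frags = 1;
        skb->data_len = len - copy;
        skb->len += len - copy;
}

/* for num_buf > 1, each additional page is appended as another
 * fragment and the skb it was queued with is freed */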
This scheme allows us to be efficient in our use of ring entries
while still supporting large packets. Benchmarking using netperf from
an external machine to a guest over a 10Gb/s network shows a 100%
improvement from ~1Gb/s to ~2Gb/s. With a local host->guest benchmark
with GSO disabled on the host side, throughput was seen to increase
from 700Mb/s to 1.7Gb/s.
Based on a patch from Herbert Xu.
Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (use netdev_priv)
Signed-off-by: David S. Miller <davem@davemloft.net>
Seems like an oversight that we have set-tx-csum and set-sg hooked
up, but not set-tso.
It also leads to the strange situation that if you e.g. disable
tx-csum, then tso doesn't get disabled.
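i.e. (sketch; the handler names are assumptions):

static struct ethtool_ops virtnet_ethtool_ops = {
        .set_tx_csum    = virtnet_set_tx_csum,
        .set_sg         = ethtool_op_set_sg,
        .set_tso        = ethtool_op_set_tso,  /* the new hook */
};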
Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Each time we re-fill the recv queue with buffers, we allocate
one too many skbs and free it again when adding fails. We should
recycle the pages allocated in this case.
A previous version of this patch made trim_pages() trim trailing
unused pages from skbs with some paged data, but this actually
caused a barely measurable slowdown.
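Sketch of the fix (using the driver's trim_pages() to return the
skb's pages to the pool; exact placement is illustrative):

err = vi->rvq->vq_ops->add_buf(vi->rvq, sg, 0, num, skb);
if (err < 0) {
        trim_pages(vi, skb);    /* recycle the attached pages */
        kfree_skb(skb);
}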
Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (use netdev_priv)
Signed-off-by: David S. Miller <davem@davemloft.net>