US20130329555A1 - Dual counter

Dual counter

Info

Publication number
US20130329555A1
US20130329555A1
Authority
US
United States
Prior art keywords
counter
packet
integrated circuit
logic unit
data traffic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/911,999
Inventor
Jay Patel
Michael J. Miller
Michael Morrison
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peraso Inc
Original Assignee
Mosys Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mosys Inc filed Critical Mosys Inc
Priority to US13/912,033 (US9667546B2)
Priority to US13/911,999 (US20130329555A1)
Publication of US20130329555A1
Priority to US14/503,382 (US11221764B2)
Assigned to MOSYS, INC.: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MILLER, MICHAEL; PATEL, JAY; MORRISON, MICHAEL
Assigned to INGALLS & SNYDER LLC: SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MOSYS, INC.
Assigned to INGALLS & SNYDER LLC: RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: PERASO INC. F/K/A MOSYS, INC.

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/10 Program control for peripheral devices
    • G06F 13/12 Program control for peripheral devices using hardware independent of the central processor, e.g. channel or peripheral processor
    • G06F 13/124 Program control for peripheral devices using hardware independent of the central processor, e.g. channel or peripheral processor where hardware is a sequential transfer control unit, e.g. microprocessor, peripheral processor or state-machine
    • G06F 13/128 Program control for peripheral devices using hardware independent of the central processor, e.g. channel or peripheral processor where hardware is a sequential transfer control unit, e.g. microprocessor, peripheral processor or state-machine for dedicated transfers to a network
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F 9/30003 Arrangements for executing specific machine instructions
    • G06F 9/3005 Arrangements for executing specific machine instructions to perform operations for flow control
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/22 Traffic shaping

Definitions

  • the type of account or network contract a user has determines the size and number of partitions comprised in programmable partitionable counter 610C.
  • Partitions may be any size, and may be variable.
  • programmable partitionable counter 610C may be programmed to split into partitions as small as 1 bit. The counters may be a fixed size, so that the size of a partition value will not be small, for example for fixed-size traffic such as ATM packets.
  • Programmable partitionable counter 610C may be programmed with four partitions of size s (e.g., s0, s1, s2 and s3) prior to system operation or implementation.
  • the s value is not stored in bandwidth engine 105, but is instead stored in slower memory.
  • each user/account comprises only one associated s value.
  • An s value is associated with a system as opposed to a user/account.
  • Programming entry commands e (e.g., e0, e1, e2 and e3) correspond with the s sizes.
  • an entry (e) may be used for a particular user/account that corresponds to an s value.
  • the packet counter partition 720 will comprise (32+(-16)) bits, while the byte counter partition 710 will comprise (32+(16)) bits.
  • programmable partitionable counter 610C may be partitioned into partitions as small as one bit (e.g., packet counter partition 720 may comprise 4 bits, while byte counter partition 710 comprises 60 bits).
  • the sum of the bits in the partitions is 64. That is, for example, one partition includes 12 bits and another partition includes 52 bits for a total of 64 bits. Accordingly, the partitions can slide back and forth to any desired granularity or setting.
  • one account/user may have a different s than another account/user.
  • an entry may be for multiple users or partitions. Multiple users may be grouped together based on the tasks they perform.
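  • As an illustration of how an entry might select a programmed size value, consider the following sketch in C. It reflects only one possible interpretation: the names, the table of s values, and the reading of s as a signed offset from the 32/32 midpoint (so that s = 16 yields a 16-bit packet counter partition 720 and a 48-bit byte counter partition 710, matching the example above) are assumptions and are not taken from the disclosure.

        /* Illustrative sketch only: how a per-account entry (e) might select a
         * programmed size value (s) that slides the partition boundary of a
         * 64-bit programmable partitionable counter. */
        #include <stdio.h>

        #define COUNTER_BITS 64

        /* Four programmed partition sizes, selected by entry index e0..e3 (example values). */
        static const int s_table[4] = { 0, 16, -16, 24 };

        static void partition_widths(int entry, int *packet_bits, int *byte_bits)
        {
            int s = s_table[entry & 3];
            *packet_bits = (COUNTER_BITS / 2) - s;   /* 32 + (-s) */
            *byte_bits   = (COUNTER_BITS / 2) + s;   /* 32 + ( s) */
        }

        int main(void)
        {
            for (int e = 0; e < 4; e++) {
                int pkt, byt;
                partition_widths(e, &pkt, &byt);
                printf("entry e%d: packet partition %2d bits, byte partition %2d bits\n",
                       e, pkt, byt);
            }
            return 0;
        }
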
  • a counter may saturate when it reaches its maximum value.
  • a user/account may receive a message indicating that the counter 610 is saturated and the memory core 110 cannot receive additional packets.
  • the user/account may receive a message indicating that if additional money is paid, memory core 110 will continue to function.
  • an address (sometimes referred to as a user identification (UID)) for a user/account is received so that memory core 110 can find the location in the memory of the packet counter 610B and the byte counter 610A.
  • the memory core 110 may find additional information associated with a user/account. For instance while an address may point to one location in the memory core 110 , the address in addition to the offset will point to another location in the memory core 110 where additional information is stored (e.g., information related to metering).
  • the programmable partitionable counter 610C may consolidate four operations into one because operations are often paired, or in other words they are often performed at the same time.
  • the byte count is read, then the byte count is written, and the packet count may be read, and then the packet count is written.
  • an address comes in for a record, or a paired counter.
  • an address comes in, and a double word, which is 144 bits, is modified based on the address and the offset and stored in memory core 110.
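  • A minimal sketch of this paired-record organization is shown below, assuming a simple array-backed memory; the structure layout, the record depth, and the use of the offset to reach related metering state are illustrative assumptions, and the 8-bit ECC carried with each 72b word is omitted.

        #include <stdint.h>

        #define NUM_RECORDS 1024

        typedef struct {
            uint64_t byte_count;     /* byte counter, e.g., 610A   */
            uint64_t packet_count;   /* packet counter, e.g., 610B */
        } paired_counter_t;          /* the paired "double word" of counter data */

        static paired_counter_t memory_core[NUM_RECORDS];    /* stand-in for memory core 110        */
        static uint64_t         metering_info[NUM_RECORDS];  /* stand-in for data at address+offset */

        /* One read-modify-write of the paired record replaces the four separate
         * operations (read byte count, write byte count, read packet count,
         * write packet count) described above. */
        static void update_paired_record(uint32_t uid, uint32_t offset, uint32_t packet_bytes)
        {
            paired_counter_t rec = memory_core[uid % NUM_RECORDS];   /* single read  */
            rec.byte_count   += packet_bytes;
            rec.packet_count += 1;
            memory_core[uid % NUM_RECORDS] = rec;                    /* single write */

            /* Related state (e.g., metering information) can live at address + offset. */
            (void)metering_info[(uid + offset) % NUM_RECORDS];
        }
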
  • flow diagram 900 illustrates example procedures used by various embodiments.
  • Flow diagram 900 includes processes and operations that, in various embodiments, are carried out by one or more of the devices illustrated in FIGS. 1-8 or via computer system 1000 or components thereof.
  • Although specific procedures are disclosed in flow diagram 900, such procedures are examples. That is, embodiments are well suited to performing various other operations or variations of the operations recited in the processes of flow diagram 900. Likewise, in some embodiments, the operations in flow diagram 900 may be performed in an order different than presented, not all of the operations described in one or more of these flow diagrams may be performed, and/or one or more additional operations may be added.
  • Flow diagram 900 includes some procedures that, in various embodiments, are carried out by a processor under the control of computer-readable and computer-executable instructions.
  • procedures described herein and in conjunction with flow diagram 900 are or may be implemented using a computer (e.g., computer system 1000 ), in various embodiments.
  • the computer-readable and computer-executable instructions can reside in any tangible computer readable storage media, such as, for example, in data storage features such as RAM (e.g., SRAM, DRAM, Flash, embedded DRAM, EPROM, EEPROM, etc.), ROM, and/or storage device.
  • the computer-readable and computer-executable instructions, which reside on tangible computer readable storage media, are used to control, or operate in conjunction with, for example, one or some combination of processors or other similar processor(s).
  • It is further appreciated that one or more procedures described in flow diagram 900 may be implemented in hardware, or a combination of hardware and firmware, or a combination of hardware and software running thereon.
  • FIG. 9 is a flow diagram 900 of an example method of partitioning and incrementing a counter in a memory, in accordance with an embodiment. Reference will be made to elements of FIGS. 1, 6, 7, and 8 to facilitate the explanation of the operations of the method of flow diagram 900.
  • the method of flow diagram 900 describes the use of programmable partitionable counters 610 C and a bandwidth engine 105 that performs a plurality of operations based on one command.
  • an entry value is programmed wherein the entry value corresponds with the size of a partition.
  • An entry value (e.g., e0, e1, e2 or e3) is programmed and corresponds with the size of a partition (e.g., s0, s1, s2 or s3).
  • a counter 610C is partitioned based on the size (s value) of the partition.
  • the entries must be programmed in an order such that if the first entry corresponds with the first partition of bits of a counter, the second entry corresponds with the next partition of bits in the counter, and so on.
  • the memory receives a packet.
  • memory core 110 can receive a packet sent from either the logic unit 120 , the write buffer 510 , or the packet processor 150 .
  • functions are performed on the values in the partitioned counter 610C based on the packet; these functions comprise operations 950 and 960.
  • logic unit 120 updates the statistics (e.g., values in the counters).
  • a bit or bits are added to byte counter partition 710 in a partition of the partitioned counter 610C.
  • a packet typically contains the packet size.
  • the packet size is added to the byte counter partition 710 within programmable partitionable counter 610C in some embodiments. In some embodiments, the packet size is added to a byte counter 610A.
  • bits are added to a packet counter partition 720 in the partitioned counter 610C based on adding bits to a byte counter partition 710.
  • a dual operation may occur: instead of sending two commands to add to both byte counter partition 710 and packet counter partition 720, a single dual operation adds to the packet counter partition 720 whenever the byte counter partition 710 changes.
  • the packet counter partition 720 may be incremented by 1 bit, more than 1 bit, or a number of bits based on the packet.
  • the values in the partitioned counter are merged together.
  • programmable merge module 810 gathers values from a plurality of counters and sends them to memory partition 640 .
  • errors are corrected on a merged partitioned counter value. This operation occurs at error correction code modules 650 .
  • error correcting may occur elsewhere in the bandwidth engine. For example, error correcting may occur at various times/places including, but not limited to: after data is read from the memory core 110 , when bandwidth engine 105 receives a packet, before bandwidth engine 105 sends a packet, before an operation is performed on data in the logic unit 120 or metering logic unit 160 , after an operation is performed on data in the logic unit 120 or metering logic unit 160 , before data enters memory core 110 , etc.
  • the merged partitioned counter values are written to the memory partition 640 .
  • the values may be sent to packet processor 150 .
  • FIG. 10 illustrates one example of a type of computer (computer system 1000 ) that can be used in accordance with or to implement various embodiments which are discussed herein.
  • computer system 1000 of FIG. 10 is an example and that embodiments as described herein can operate on or within a number of different computer systems including, but not limited to, general purpose networked computer systems, embedded computer systems, routers, switches, server devices, client devices, various intermediate devices/nodes, stand alone computer systems, media centers, handheld computer systems, multi-media devices, and the like.
  • Computer system 1000 of FIG. 10 is well adapted to having peripheral tangible computer-readable storage media 1002 such as, for example, a floppy disk, a compact disc, digital versatile disc, other disc based storage, universal serial bus “thumb” drive, removable memory card, and the like coupled thereto.
  • the tangible computer-readable storage media is non-transitory in nature.
  • System 1000 of FIG. 10 includes an address/data bus 1004 for communicating information, and a processor 1006A coupled with bus 1004 for processing information and instructions. As depicted in FIG. 10, system 1000 is also well suited to a multi-processor environment in which a plurality of processors 1006A, 1006B, and 1006C are present. Conversely, system 1000 is also well suited to having a single processor such as, for example, processor 1006A. Processors 1006A, 1006B, and 1006C may be any of various types of microprocessors.
  • System 1000 also includes data storage features such as a computer usable volatile memory 1008, e.g., random access memory (RAM), coupled with bus 1004 for storing information and instructions for processors 1006A, 1006B, and 1006C.
  • System 1000 also includes computer usable non-volatile memory 1010, e.g., read only memory (ROM), coupled with bus 1004 for storing static information and instructions for processors 1006A, 1006B, and 1006C.
  • System 1000 may also include a data storage unit 1012, e.g., a magnetic or optical disk and disk drive.
  • System 1000 may also include an alphanumeric input device 1014 including alphanumeric and function keys coupled with bus 1004 for communicating information and command selections to processor 1006A or processors 1006A, 1006B, and 1006C.
  • System 1000 may also include cursor control device 1016 coupled with bus 1004 for communicating user input information and command selections to processor 1006A or processors 1006A, 1006B, and 1006C.
  • system 1000 may also include display device 1018 coupled with bus 1004 for displaying information.
  • display device 1018 of FIG. 10 when included, may be a liquid crystal device, cathode ray tube, plasma display device or other display device suitable for creating graphic images and alphanumeric characters recognizable to a user.
  • Cursor control device 1016 when included, allows the computer user to dynamically signal the movement of a visible symbol (cursor) on a display screen of display device 1018 and indicate user selections of selectable items displayed on display device 1018 .
  • Implementations of cursor control device 1016 known in the art include a trackball, mouse, touch pad, joystick, or special keys on alphanumeric input device 1014 capable of signaling movement of a given direction or manner of displacement.
  • a cursor can be directed and/or activated via input from alphanumeric input device 1014 using special keys and key sequence commands.
  • System 1000 is also well suited to having a cursor directed by other means such as, for example, voice commands.
  • System 1000 also includes an I/O device 1020 for coupling system 1000 with external entities.
  • I/O device 1020 is a modem for enabling wired or wireless communications between system 1000 and an external network such as, but not limited to, the Internet.
  • I/O device 1020 uses SerDes technology.
  • an operating system 1022, applications 1024, modules 1026, and data 1028 are shown as typically residing in one or some combination of computer usable volatile memory 1008 (e.g., RAM), computer usable non-volatile memory 1010 (e.g., ROM), and data storage unit 1012.
  • all or portions of various embodiments described herein are stored, for example, as an application 1024 and/or module 1026 in memory locations within RAM 1008, computer-readable storage media within data storage unit 1012, peripheral computer-readable storage media 1002, and/or other tangible computer-readable storage media.

Abstract

An integrated circuit device for receiving packets. The integrated circuit device includes a first counter for counting a number of the packets, and a second counter for counting bytes of the packets. The first counter and the second counter are configured to be incremented by a single command from a packet processor.

Description

    CROSS-REFERENCE TO RELATED U.S. APPLICATIONS
  • This application claims priority to and benefit of co-pending U.S. Patent Application No. 61/656,377, filed on Jun. 6, 2012, entitled, “MEMORY DEVICE WITH AN EMBEDDED LOGIC UNIT,” by Tang et al., having Attorney Docket No. MP-1234.PRO, and assigned to the assignee of the present application.
  • This Application is related to U.S. patent application Ser. No. ______, filed on ______, entitled “HIGH UTILIZATION MULTI-PARTITIONED MEMORY WITH WRITE CACHE, BIST, AND STATISTICS FUNCTIONS,” by Miller et al., having Attorney Docket No. MP-1237, and assigned to the assignee of the present application.
  • BACKGROUND
  • In modern communication networks, data is transferred using a formatted unit of data referred to as a packet. When data is formatted into packets, the bitrate of the communication medium is better shared among users than if the network were circuit switched. Packet processors have specific features or architectures that are provided to enhance and optimize packet processing within these networks.
  • DISCLOSURE
  • A memory device with an embedded logic unit is disclosed. In particular, a packet processing acceleration device, also referred to as a bandwidth engine, that integrates high density memory, a high speed interface, and an arithmetic logic unit is disclosed. The bandwidth engine receives signals from sources external to the bandwidth engine, including one or more instructions. The bandwidth engine also comprises a programmable instruction memory, a programmable configuration memory, and, in some embodiments, a plurality of counters. Additional embodiments in accordance with the present invention are described below.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings, which are incorporated in and form a part of this application, illustrate embodiments of the present invention, and together with the description, serve to explain the principles of the invention. Unless noted, the drawings referred to in this description should be understood as not being drawn to scale. It should be noted that a break in a line in the drawings referred to in this description signifies that a line and the perpendicular line(s) crossing it do not connect.
  • FIG. 1 is a diagram of an example of a memory system including a bandwidth engine, in accordance with various embodiments.
  • FIG. 2 is a diagram of an example packet processor coupled to a plurality of bandwidth engines, in accordance with various embodiments.
  • FIG. 3 is a block diagram of an example metering logic unit, in accordance with various embodiments.
  • FIG. 4 is a block diagram of an example traffic conditioner, in accordance with various embodiments.
  • FIG. 5 is a block diagram of an example bandwidth engine, in accordance with various embodiments.
  • FIG. 6 is a block diagram of an example bandwidth engine, in accordance with various embodiments.
  • FIG. 7 is a diagram of an example programmable partitionable counter, according to various embodiments.
  • FIG. 8 is a block diagram of an example bandwidth engine, in accordance with various embodiments.
  • FIG. 9 illustrates a flow diagram of an example method of partitioning and incrementing a counter in a memory, in accordance with various embodiments.
  • FIG. 10 is a block diagram of a system used in accordance with one embodiment.
  • DESCRIPTION OF EMBODIMENTS
  • Reference will now be made in detail to various embodiments, examples of which are illustrated in the accompanying drawings. While the subject matter will be described in conjunction with these embodiments, it will be understood that they are not intended to limit the subject matter to these embodiments. Furthermore, in the following description, numerous specific details are set forth in order to provide a thorough understanding of the subject matter. In other instances, conventional methods, procedures, objects, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the subject matter.
  • NOTATION AND NOMENCLATURE
  • Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present Description of Embodiments, discussions utilizing terms such as “programming,” “partitioning,” “receiving,” “performing,” “adding,” “merging,” “correcting,” “writing,” or the like, refer to the actions and processes of a computer system or similar electronic computing device (or portion thereof) such as, but not limited to: an electronic control module, field programmable gate array (FPGA), application-specific integrated circuit (ASIC), and/or a management system (or portion thereof). The electronic computing device manipulates and transforms data represented as physical (electronic) quantities within the electronic computing device's processors, registers, and/or memories into other data similarly represented as physical quantities within the electronic computing device's memories, registers, and/or other such information storage, processing, transmission, and/or display components of the electronic computing device or other electronic computing device(s).
  • Overview of Discussion
  • Example techniques, devices, systems, and methods for a memory system comprising a memory with an embedded logic unit are described herein. Discussion begins with a high level description of a memory system comprising a memory with an embedded logic unit, also known as a “bandwidth engine.” Example methods of governing network traffic are then described, including metering, policing and shaping. Discussion continues with a description of the memory system performing a plurality of operations in response to a single command. A memory system that comprises a programmable partitionable counter is then described. Next, an example method of use is described. Lastly, an example computer environment is described.
  • Bandwidth Engine Overview
  • FIG. 1 shows a block diagram of a memory system 100 in accordance with one embodiment of the present invention. Memory system 100 comprises bandwidth engine 105 and is located on an integrated circuit chip. Bandwidth engine 105 comprises memory core 110 and logic unit 120. Bandwidth engine 105 may be used for data that is accessed often.
  • In various embodiments, bandwidth engine 105 is configured to receive packets from a second apparatus via a communication path. In some embodiments, the second apparatus is packet processor 150. Packet processor 150 may be any type of processor including, but not limited to: a central processing unit, a microprocessor, a graphics processing unit, a physics processing unit, a digital signal processor, a network processor, a front end processor, a coprocessor, a data processor, an audio processor, a multi core processor, an ASIC, a system-on-chip (SoC), a structured ASIC, an FPGA, etc.
  • Packet processor 150 is capable of performing commands on external memory. As an example, packet processor 150 may fetch data from a memory block, perform a command using the data received from the memory block, write back to the memory block, and then send a command which updates the memory block's internal counters.
  • Bandwidth engine 105 contains an internal arithmetic logic unit (ALU or logic unit) 120 such that when packet processor 150 attempts to perform an operation using data located in memory core 110, rather than sending the data directly from memory core 110 to packet processor 150 to perform a command, logic unit 120 can perform a command, update the internal counter of memory core 110, send data to memory core 110, and/or send data to packet processor 150. In some embodiments, memory core 110 may be partitioned memory. Logic unit 120 can access, modify and retire networking operations wholly within bandwidth engine 105. Logic unit 120 supports statistics, counters and other applications, and reduces the latency and bandwidth requirements for macro operations.
  • Latency is reduced with bandwidth engine 105 since packet processor 150 does not need to send and receive as much data because of embedded logic unit 120 of bandwidth engine 105. Additionally, more bandwidth is available because bandwidth engine 105 does more work locally. In some embodiments, a plurality of commands may be performed within bandwidth engine 105 after bandwidth engine 105 receives a single command from packet processor 150. As a result, the number of packet processing operations performed by packet processor 150 is reduced. In particular, packet processor 150 is not required to perform these processing commands itself because it offloads some processing to bandwidth engine 105.
  • In various embodiments, memory core 110 may be serial or parallel accessed. In some embodiments, memory core 110 may be constructed using memory types including, but not limited to: SRAM, DRAM, embedded DRAM, 1T-SRAM, Quad Data Rate SRAM, RLDRAM, or Flash. Also, in some embodiments packet processor 150 may be coupled to a plurality of memory systems 100, as shown in FIG. 2. FIG. 2 shows a packet processor coupled to bandwidth engines 105A, 105B, 105C . . . 105n, 105n+1, and 105n+2.
  • In some embodiments, bandwidth engine 105 uses a serial interface based on an open, industry-available serial protocol optimized for chip-to-chip communications. Bandwidth engine 105 also provides an interface for serializer/deserializer (SerDes) technology (i.e., functional blocks that convert data between serial and parallel interfaces in each direction).
  • Bandwidth engine 105 includes timer 140 that sends a signal into logic unit 120 and metering logic unit 160. Bandwidth engine 105 can include a plurality of timers 140. Timers 140 can determine the amount of time that has passed since a particular event. For example, timers 140 may determine the amount of time that has passed since a record or information about a user/account was accessed.
  • Bandwidth engine 105 includes lookup table 130. In one embodiment, as traffic passes through logic unit 120, lookup table 130 provides logic unit 120 and/or metering logic unit 160 with actions that should be performed on the traffic. In one embodiment, when either logic unit 120 or metering logic unit 160 receives a packet, lookup table 130 may be used to look up whether a bucket contains sufficient tokens and whether that traffic may be passed. The bandwidth engine then returns a metering result (e.g., whether that traffic may be passed) to the host ASIC/packet processor 150. In some embodiments, the lookup table 130 may be configured to carry out a plurality of procedures.
  • Metering logic unit 160, in some embodiments, runs in parallel with logic unit 120 and, as discussed below, meters and marks traffic to ensure compliance with a traffic contract. Note that in some embodiments, bandwidth engine 105 does not comprise a metering logic unit 160. In some embodiments, the logic unit 120 performs operations related to metering traffic, marking traffic, and sending signals to packet processor 150 or traffic conditioner 170.
  • Packet processor 150 includes traffic conditioner 170 that, as discussed below, shapes and drops packets in response to signals/recommendations received from metering logic unit 160 or logic unit 120. In some embodiments, packet processor 150 may disregard incoming data from bandwidth engine 105 that recommends whether packet processor 150 should send, delay, or drop a packet.
  • Metering, Policing, and Shaping
  • Embodiments of the present invention provide for metering data packets. Metering is the process of governing network traffic for compliance with a traffic contract and taking steps to enforce that contract.
  • A traffic contract is similar to a service level agreement with a broadband network. In various embodiments, networks employ an asynchronous transfer mode (ATM). When a service or application wishes to use a broadband network to transport traffic, it must first inform the network about what kind of traffic is to be transported, and in some cases the performance requirements of the traffic. In some networks, regardless of the traffic contract, if bandwidth is available, packets may be sent. In some networks, the time of day or the network congestion may be used to determine whether packets may be sent. In other words, the decision to send a packet may be based on the time of day or the congestion on a given network. A network may require an application wishing to send traffic over the network to indicate the type of service required, traffic parameters of the data flow in each direction, and/or the quality of service parameters requested in each direction. Services include, but are not limited to: the constant bit rate, the real-time variable bit rate, the non-real-time variable bit rate, the available bit rate, and the unspecified bit rate.
  • When metering traffic, traffic that is in violation of a traffic contract may be dropped, marked as non-compliant, or left as-is depending on the policies of a network. Policing refers to dropping or disregarding traffic. “Coloring” refers to marking traffic as compliant, semi-compliant, or in violation/non-compliant. Also, shaping refers to rate limiting, or delaying traffic to bring packets into compliance with a traffic policy.
  • FIG. 3 shows a metering logic unit 160.
  • Metering logic unit 160 comprises a meter 310 to determine whether the incoming traffic flow complies with a traffic contract. Meter 310 may determine whether a packet is compliant, semi-compliant, or in violation of a contract. As discussed above, in some embodiments there is no metering logic unit 160, in which case logic unit 120 performs the operations described herein as being performed by metering logic unit 160. In one embodiment, meter 310 determines the status of a packet based on its “color.” It should be understood by those skilled in the art that a color is represented by at least one bit having a particular value. As an example, a packet may be marked as green, yellow, or red. Green typically indicates that a packet is compliant. In one embodiment, green may indicate that a packet does not exceed a committed burst size. Yellow typically indicates that a packet is not compliant, but is not in violation of the traffic contract. In one embodiment, yellow indicates that a packet exceeds the committed burst size, but does not exceed an excess burst size. Red typically indicates that the packet is in violation of the traffic contract.
  • In some embodiments, metering logic unit 160 further comprises marker 320 that can mark, or re-mark, a packet based on information provided by the meter 310. In some embodiments, the marker 320 changes the color of a packet from one color to a different color.
  • In some embodiments, packet processor 150 comprises traffic conditioner 170 that comprises a shaper/dropper 410 as shown in FIG. 4. Shaper/dropper 410 ensures the traffic sent by packet processor 150 complies with the traffic contract. In some cases, compliance may be enforced by policing (dropping packets). In some embodiments, a token bucket is used. In some embodiments, metered packets are stored in a FIFO buffer/queue until they can be transmitted in compliance with the traffic contract. It should be understood that policing and shaping may occur concurrently. Also, as discussed above, in some embodiments after packet processor 150/traffic conditioner 170 receives a recommendation regarding whether to send, delay, or drop a packet, packet processor 150/traffic conditioner 170 may disregard the recommendation and either send, delay or drop a packet regardless of the data sent by metering logic unit 160 or logic unit 120.
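  • The following sketch illustrates one way a FIFO-based shaper/dropper of this kind could behave: metered packets wait in a queue and are released only when enough tokens have accumulated, and a packet is dropped (policed) when the queue is full. The queue depth, token refill model, and function names are assumptions for illustration and are not taken from the disclosure.

        #include <stdbool.h>
        #include <stdint.h>

        #define SHAPER_QUEUE_DEPTH 64

        typedef struct {
            uint32_t packet_bytes[SHAPER_QUEUE_DEPTH];  /* FIFO of pending packet lengths */
            int      head, count;
            uint64_t tokens;                            /* refilled at the contracted rate */
        } shaper_t;

        /* Enqueue for later transmission, or drop if the queue is full. */
        static bool shaper_enqueue(shaper_t *s, uint32_t bytes)
        {
            if (s->count == SHAPER_QUEUE_DEPTH)
                return false;                           /* police: drop the packet */
            s->packet_bytes[(s->head + s->count) % SHAPER_QUEUE_DEPTH] = bytes;
            s->count++;
            return true;
        }

        /* Called periodically: release packets in order while tokens allow. */
        static int shaper_release(shaper_t *s)
        {
            int sent = 0;
            while (s->count > 0 && s->tokens >= s->packet_bytes[s->head]) {
                s->tokens -= s->packet_bytes[s->head];
                s->head = (s->head + 1) % SHAPER_QUEUE_DEPTH;
                s->count--;
                sent++;
            }
            return sent;
        }
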
  • In some embodiments, metering is performed using at least one token bucket. When metering logic unit 160 is determining whether a packet is in compliance with a traffic contract, it will determine whether a bucket contains sufficient tokens to send the packet. Bucket tokens may be credited or debited. If a bucket contains a sufficient number of tokens such that a packet is in compliance, the appropriate number of tokens (which may be equivalent to the length of a packet in bytes) are removed, or debited/charged, from the token bucket and the packet is sent. In one embodiment, the packet is colored based on whether sufficient tokens are available. In another embodiment, if there are insufficient tokens in a bucket or buckets (e.g., main bucket, sister bucket, etc.) the packet is not in compliance and the contents of the bucket are not charged. Non-compliant packets may be dropped, queued for subsequent transmission when sufficient tokens have accumulated in a bucket, or transmitted after it is marked as in violation/non-compliant. If a packet is marked as in violation, it may be dropped if the network is subsequently overloaded. It should be understood that a leaky bucket may be employed in some embodiments. In general, a leaky bucket is used to check that data transmissions, in the form of packets, conform to defined limits on bandwidth and burstiness (a measure of the unevenness or variations in the traffic flow).
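  • A minimal token-bucket metering sketch in this spirit is shown below, using a committed bucket and an excess (sister) bucket to color packets green, yellow, or red and debiting tokens only when the packet conforms; the parameter names and the refill routine are assumptions, not details taken from the disclosure.

        #include <stdint.h>

        typedef enum { COLOR_GREEN, COLOR_YELLOW, COLOR_RED } color_t;

        typedef struct {
            uint64_t committed_tokens;  /* tokens against the committed burst size */
            uint64_t excess_tokens;     /* tokens against the excess burst size    */
            uint64_t cbs;               /* committed burst size, in bytes          */
            uint64_t ebs;               /* excess burst size, in bytes             */
        } meter_t;

        /* Credit tokens at the contracted rate; could be driven periodically (e.g., by timer 140). */
        static void meter_credit(meter_t *m, uint64_t bytes)
        {
            m->committed_tokens += bytes;
            if (m->committed_tokens > m->cbs) {
                m->excess_tokens += m->committed_tokens - m->cbs;
                m->committed_tokens = m->cbs;
                if (m->excess_tokens > m->ebs)
                    m->excess_tokens = m->ebs;
            }
        }

        /* Color a packet and debit tokens only when it conforms; a red packet
         * leaves both buckets unchanged, as described above. */
        static color_t meter_packet(meter_t *m, uint64_t packet_bytes)
        {
            if (m->committed_tokens >= packet_bytes) {
                m->committed_tokens -= packet_bytes;
                return COLOR_GREEN;
            }
            if (m->excess_tokens >= packet_bytes) {
                m->excess_tokens -= packet_bytes;
                return COLOR_YELLOW;
            }
            return COLOR_RED;
        }
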
  • FIG. 5 shows a bandwidth engine 105, in accordance with one embodiment. Bandwidth engine 105 may comprise a write buffer 510 coupled to memory core 110 and bypass block 520. The write buffer 510 comprises data to be written to the memory. If an address (sometimes referred to as a user identification (UID)) is received by memory core 110 while write buffer 510 has data waiting to be written to the same address/UID, the write buffer sends the data waiting to be written to the bypass block 520.
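  • One way to picture the bypass behavior is the sketch below: a read whose address matches a pending entry in the write buffer is served from that entry (the bypass path) instead of from the memory core, so the read does not return stale data. The buffer depth, data width, and helper names are assumptions made for illustration.

        #include <stdbool.h>
        #include <stdint.h>

        #define WRITE_BUFFER_DEPTH 8
        #define CORE_WORDS         1024

        typedef struct {
            bool     valid;
            uint32_t uid;    /* address / user identification */
            uint64_t data;
        } wb_entry_t;

        static wb_entry_t write_buffer[WRITE_BUFFER_DEPTH];  /* stand-in for write buffer 510 */
        static uint64_t   core[CORE_WORDS];                  /* stand-in for memory core 110  */

        /* Read with bypass: the newest matching write-buffer entry wins
         * (entries are assumed to be filled in increasing index order);
         * otherwise the value comes from the memory core. */
        static uint64_t read_with_bypass(uint32_t uid)
        {
            for (int i = WRITE_BUFFER_DEPTH - 1; i >= 0; i--) {
                if (write_buffer[i].valid && write_buffer[i].uid == uid)
                    return write_buffer[i].data;              /* forwarded via bypass block 520 */
            }
            return core[uid % CORE_WORDS];                    /* no pending write to this UID   */
        }
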
  • Performing a Plurality of Operations in Response to a Single Command
  • FIGS. 6 and 8 depict embodiments of a bandwidth engine. In some embodiments, when a packet is received at memory core 110, statistics are updated. Statistics may comprise a number of bytes received and/or a number of packets received. A packet received at packet processor 150 may contain statistics such as, but not limited to, a size, an address (or user identification), and a command. When a packet is received at packet processor 150, a command containing a counter address and a byte count value is sent to bandwidth engine 105. The bandwidth engine 105 reads from the memory core 110 the contents of the counters (e.g., byte and packet) at that address. The counters are then incremented accordingly. For example, the packet counter is incremented by 1 and the byte counter is incremented by the number of bytes of the packet.
  • A packet counter 610B holds the number of packets received by a memory core 110. A byte counter 610A holds the number of bytes received by a memory core 110. In one embodiment, packet counters are implemented as lifetime counters, which are wide enough (e.g., 64b, 128b, etc.) that the counter will continue to operate for the lifetime of the chip. In other words, a lifetime counter will not overflow during the lifetime of the system in which it is embedded. In some embodiments, counters smaller than lifetime counters are used (e.g., 32 bits, 64 bits, etc.); for example, counter 610B may be designed to be reset monthly, yearly, etc. In some embodiments, a counter (byte counter 610A, packet counter 610B, or both) is unique to a user, an account, a group of users, etc.
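  • A quick back-of-the-envelope check illustrates why a 64-bit counter can behave as a lifetime counter while a 32-bit counter cannot; the assumed rate of one billion packets per second is purely illustrative and is not taken from the disclosure.

        #include <stdio.h>

        int main(void)
        {
            const double packets_per_second = 1e9;                      /* assumed line rate */
            const double seconds_per_year   = 365.25 * 24.0 * 3600.0;
            const double two_pow_32         = 4294967296.0;             /* 2^32 */
            const double two_pow_64         = 18446744073709551616.0;   /* 2^64 */

            printf("32-bit counter wraps after about %.1f seconds\n",
                   two_pow_32 / packets_per_second);
            printf("64-bit counter wraps after about %.0f years\n",
                   two_pow_64 / packets_per_second / seconds_per_year);
            return 0;
        }
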
  • A packet received by packet processor 150 may be any size; for example, the packets may be of variable sizes. The commands sent to the bandwidth engine from the packet processor may comprise 72 bits, wherein each command comprises 64 bits of data and 8 bits of error correcting code. In one embodiment, at least one error correcting code module (e.g., error correcting code module 650 and error correcting code module 651) is comprised in memory core 110. In one embodiment, counters 610 are embedded in a memory chip; in another embodiment, counters 610 are remote from the memory chip.
  • In various embodiments, statistics (e.g., the value of packet counter 610B) are sent back to packet processor 150 every time they change. Statistics may be sent to packet processor 150 at predetermined intervals (e.g., every nth update, once a day, once a month, etc.). In one embodiment, statistics are not sent to packet processor 150.
  • In various embodiments, after memory core 110 receives a packet with one command, a plurality of functions are performed by logic unit 120. This is sometimes referred to as dual operations or dual updating. For example, if one piece of information is received, such as packet size, then the byte counter 610A and the packet counter 610B may both be changed. In other words, the packet counter 610B is changed based on the receipt of a packet size, which is used in updating byte counter 610A. It should be noted that in some embodiments byte counter 610A and packet counter 610B may refer to byte counter partition 710 and packet counter partition 720, respectively (both of FIG. 7).
  • As an example, when a PDADD (Paired Double Word Add) command is received, the statistics are modified. For example, the operations P_COUNT=P_COUNT+1 and BYTE_COUNT=BYTE_COUNT+PACKET_SIZE are performed by logic unit 120 when a packet is received by memory core 110. Note that the size of a word can be any size including, but not limited to: 64 bits, 128 bits, etc.
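  • A minimal C sketch of the dual update performed in response to a single PDADD-style command is shown below; the record layout and function names are assumptions for this example, and the bandwidth engine performs the equivalent read-modify-write in hardware rather than in software.

    #include <stdint.h>

    /* Hypothetical paired counter record stored in the memory core. */
    typedef struct {
        uint64_t packet_count;  /* number of packets received */
        uint64_t byte_count;    /* number of bytes received   */
    } paired_counters;

    /* One command carries the counter address and the packet size;
     * the logic unit then performs both updates:
     *   P_COUNT    = P_COUNT + 1
     *   BYTE_COUNT = BYTE_COUNT + PACKET_SIZE                     */
    static void pdadd(paired_counters *table, uint32_t counter_addr,
                      uint64_t packet_size)
    {
        paired_counters *rec = &table[counter_addr]; /* read   */
        rec->packet_count += 1;                      /* modify */
        rec->byte_count   += packet_size;            /* write  */
    }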
  • Incrementer 630 may increment the packet counter 610B. It should be understood that incrementer 630 is a characterization of an action performed by logic unit 120. In one embodiment, incrementer 630 increments the packet counter by one bit. In various embodiments, incrementer 630 increments the packet counter by: a static number of bits greater than one, a programmable number of bits greater than one, a number of bits based on other parameters or statistics including packet size, or a negative number of bits. In one embodiment, counters may be incremented once every memory clock cycle. Counters may be incremented every system clock cycle. Counters may be incremented sequentially or simultaneously.
  • In various embodiments, incrementing two lifetime counters (e.g., byte and packet) conventionally requires two reads and two writes on 72b words (e.g., 64b+8b ECC), for a total of four commands and 4×72b of data transferred between the processor and the bandwidth engine. In contrast, the present method transfers a single 72b command from the processor to the bandwidth engine. This can improve bandwidth by as much as 5 times compared to conventional systems. It is noted that the memory bandwidth bottleneck between processors and memory may be a limiting factor in producing 400 Gbps and 1 Tbps network packet processors. Also, statistics can consume as much as 25 to 30% of the chip memory bandwidth.
  • Programmable Partitionable Counter
  • Counters (e.g., byte counter 610A and packet counter 610B) may be partitioned. In other words, a counter may be split into multiple partitions: once, twice, three times, four times, or more. For example, counter 610C may be split into a byte counter partition 710 and a packet counter partition 720. FIG. 7 shows a programmable partitionable counter 610C comprising 64 bits. A user may program a partition to separate counter 610C into byte counter partition 710 and packet counter partition 720 at any position he or she chooses. For example, a user may want to partition the programmable partitionable counter 610C such that the packet counter partition 720 comprises 16 bits while the byte counter partition 710 comprises 48 bits. FIG. 7 shows dashed lines indicating example partitions that occur between bits 15 and 16, 31 and 32, and 47 and 48. It should be noted that a partition may be smaller than 16 bits.
  • In one embodiment, the type of account or network contract a user has determines the size and number of partitions comprised in programmable partitionable counter 610C. Partitions may be any size, and may be variable. In some embodiments, programmable partitionable counter 610C may be programmed to split into partitions as small as 1 bit. The counters may be of a fixed size, and a partition may be sized to suit the traffic being counted (e.g., small, fixed-size ATM cells versus variable-size packets).
  • Programmable partitionable counter 610C may be programmed with four partitions of size s (e.g., s0, s1, s2 and s3) prior to system operation or implementation. In some embodiments, the s value is not stored in bandwidth engine 105, and is instead stored in slower memory. In one embodiment, each user/account comprises only one associated s value. In another embodiment, an s value is associated with a system as opposed to a user/account. Programming entry commands e (e.g., e0, e1, e2 and e3) correspond with the s sizes. In some embodiments, an entry (e) may be used for a particular user/account that corresponds to an s value. In one example, if an e value is entered that corresponds to an s value of −16 in a 64-bit counter, the packet counter partition 720 will comprise (32+(−16))=16 bits, while the byte counter partition 710 will comprise (32+16)=48 bits. Note that programmable partitionable counter 610C may be partitioned into partitions as small as one bit (e.g., packet counter partition 720 may comprise 4 bits, while byte counter partition 710 comprises 60 bits). It should be understood that, in one embodiment, the sum of the bits in the partitions is 64. That is, for example, one partition includes 12 bits and another partition includes 52 bits, for a total of 64 bits. Accordingly, the partitions can slide back and forth to any desired granularity or setting. In one embodiment, one account/user may have a different s than another account/user. In other embodiments, an entry may be for multiple users or partitions. Multiple users may be grouped together based on the tasks they perform.
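  • The C sketch below illustrates how a single 64-bit programmable partitionable counter word might be split by a signed size value s (relative to a 32/32 split) and updated with a dual add; the helper names, the interpretation of s, and the placement of the byte counter partition in the low-order bits are assumptions drawn from the example above, not a required implementation.

    #include <stdint.h>

    /* Split position for a 64-bit word: the packet counter partition
     * occupies the upper (32 + s) bits and the byte counter partition
     * the lower (32 - s) bits. With s = -16, the packet partition is
     * 16 bits and the byte partition is 48 bits, as in the example.
     * Assumes -31 <= s <= 31 so each partition holds at least 1 bit. */
    static unsigned byte_bits(int s) { return (unsigned)(32 - s); }

    /* Perform the dual update on the packed 64-bit counter word: add
     * the packet size to the byte partition and 1 to the packet
     * partition. Overflow handling (saturate or wrap) is omitted. */
    static uint64_t dual_update(uint64_t word, int s, uint64_t packet_size)
    {
        unsigned bbits = byte_bits(s);
        uint64_t bmask = (1ULL << bbits) - 1;

        uint64_t bytes   = word & bmask;   /* byte counter partition   */
        uint64_t packets = word >> bbits;  /* packet counter partition */

        bytes   += packet_size;
        packets += 1;

        return (packets << bbits) | (bytes & bmask);
    }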
  • A counter may saturate when it reaches its maximum value. When a counter reaches its maximum value, in one embodiment, a user/account may receive a message indicating that the counter 610 is saturated and the memory core 110 cannot receive additional packets. The user/account may receive a message indicating that if additional money is paid, memory core 110 will continue to function. In one embodiment, after a counter reaches its maximum value it returns to zero.
  • In an embodiment, an address (sometimes referred to as a user identification (UID)) for a user/account is received so that memory core 110 can find the location in the memory of packet counter 610B and byte counter 610A. By using an offset, memory core 110 may find additional information associated with a user/account. For instance, while an address may point to one location in memory core 110, the address plus the offset will point to another location in memory core 110 where additional information is stored (e.g., information related to metering).
  • In one embodiment, the programmable partitionable counter 610C may consolidate four operations into one because operations are often paired, or in other words they are often performed at the same time. In an embodiment, the byte count is read, then the byte count is written, and the packet count may be read, and then the packet count is written. In one embodiment, an address comes in for a record, or a paired counter. In an embodiment, an address comes in, and a double word which is 144 bits is modified based on the address and the offset and stored in memory core 110.
  • In various embodiments, after data is sent to the logic unit 120 and before the resulting data is sent back to an address in memory core 110, another command is received by memory core 110 attempting to access the data at the address where the resulting data is intended to be written. At that point, the data at that address in memory core 110 is considered to be stale.
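  • The following C sketch illustrates, under assumed structure and function names, the write buffer bypass behavior described in connection with FIG. 5 together with the stale-data condition above: if a read arrives for an address/UID that still has pending write data, the pending (fresh) data is forwarded via the bypass path rather than returning the stale contents of memory core 110.

    #include <stdbool.h>
    #include <stdint.h>

    #define WBUF_DEPTH 8   /* illustrative depth only */

    /* One pending write held in the write buffer (510). */
    typedef struct {
        bool     valid;
        uint32_t addr;     /* address / UID awaiting write-back */
        uint64_t data;     /* data waiting to be written        */
    } wbuf_entry;

    typedef struct {
        wbuf_entry entries[WBUF_DEPTH];
    } write_buffer;

    /* Read with bypass: if the write buffer holds newer data for this
     * address, forward it (bypass block 520); otherwise the data read
     * from the memory core is used as-is. */
    static uint64_t read_with_bypass(const write_buffer *wb, uint32_t addr,
                                     uint64_t data_from_memory_core)
    {
        for (int i = 0; i < WBUF_DEPTH; i++) {
            if (wb->entries[i].valid && wb->entries[i].addr == addr) {
                return wb->entries[i].data;  /* memory contents are stale */
            }
        }
        return data_from_memory_core;
    }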
  • Example Methods of Operation
  • With reference to FIG. 9, flow diagram 900 illustrates example procedures used by various embodiments. Flow diagram 900 includes processes and operations that, in various embodiments, are carried out by one or more of the devices illustrated in FIGS. 1-8 or via computer system 1000 or components thereof.
  • Although specific procedures are disclosed in flow diagram 900, such procedures are examples. That is, embodiments are well suited to performing various other operations or variations of the operations recited in the processes of flow diagram 900. Likewise, in some embodiments, the operations in flow diagram 900 may be performed in an order different than presented, not all of the operations described in one or more of these flow diagrams may be performed, and/or one or more additional operations may be added.
  • The following discussion sets forth in detail the operation of some example methods of operation of embodiments. Flow diagram 900 includes some procedures that, in various embodiments, are carried out by a processor under the control of computer-readable and computer-executable instructions. In this fashion, procedures described herein and in conjunction with flow diagram 900 are or may be implemented using a computer (e.g., computer system 1000), in various embodiments. The computer-readable and computer-executable instructions can reside in any tangible computer-readable storage media, such as, for example, in data storage features such as RAM (e.g., SRAM, DRAM, Flash, embedded DRAM, EPROM, EEPROM, etc.), ROM, and/or a storage device. The computer-readable and computer-executable instructions, which reside on tangible computer-readable storage media, are used to control or operate in conjunction with, for example, one or some combination of processors or other similar processor(s). It is further appreciated that one or more procedures described in flow diagram 900 may be implemented in hardware, or a combination of hardware and firmware, or a combination of hardware and software running thereon.
  • FIG. 9 is a flow diagram 900 of an example method of partitioning and incrementing a counter in a memory, in accordance with an embodiment. Reference will be made to elements of FIGS. 1, 6, 7 and 8 to facilitate the explanation of the operations of the method of flow diagram 900. In one embodiment, the method of flow diagram 900 describes the use of programmable partitionable counters 610C and a bandwidth engine 105 that performs a plurality of operations based on one command.
  • At 910, in one embodiment, an entry value is programmed, wherein the entry value corresponds with the size of a partition. An entry value (e.g., e0, e1, e2 or e3) is programmed and corresponds with the size of a partition (e.g., s0, s1, s2 or s3).
  • At 920, in one embodiment, a counter 610C is partitioned based on the size (s value) of the partition. In one embodiment, the entries must be programmed in an order such that if the first entry corresponds with the first partition of bits of a counter, the second entry corresponds with the next partition of bits in the counter, and so on.
  • At 930, in one embodiment, the memory receives a packet. In various embodiments, memory core 110 can receive a packet sent from the logic unit 120, the write buffer 510, or the packet processor 150.
  • At 940, in one embodiment, functions comprising operations 950 and 960 are performed on the values in the partitioned counter 610C based on the packet. In one embodiment, logic unit 120 updates the statistics (e.g., the values in the counters).
  • At 950, in one embodiment, a bit or bits are added to byte counter partition 710 of the partitioned counter 610C. As discussed herein, a packet typically contains the packet size. The packet size is added to the byte counter partition 710 within programmable partitionable counter 610C in some embodiments. In some embodiments, the packet size is added to a byte counter 610A.
  • At 960, in one embodiment, bits are added to a packet counter partition 720 in the partitioned counter 610C based on adding bits to a byte counter partition 710. As discussed herein, a dual operation may occur where instead of sending two commands to add to both byte counter partition 710 and packet counter partition 720, a dual operation adds to the packet counter partition 720 whenever the byte counter partition 710 changes. As discussed herein, the packet counter partition 720 may be incremented by 1 bit, more than 1 bit, or a number of bits based on the packet.
  • At 970, in one embodiment, the values in the partitioned counter are merged together. In some embodiments, programmable merge module 810 gathers values from a plurality of counters and sends them to memory partition 640.
  • At 980, in one embodiment, errors are corrected on a merged partitioned counter value. This operation occurs at error correction code modules 650. Note that error correcting may occur elsewhere in the bandwidth engine. For example, error correcting may occur at various times/places including, but not limited to: after data is read from the memory core 110, when bandwidth engine 105 receives a packet, before bandwidth engine 105 sends a packet, before an operation is performed on data in the logic unit 120 or metering logic unit 160, after an operation is performed on data in the logic unit 120 or metering logic unit 160, before data enters memory core 110, etc.
  • At 990, in one embodiment, the merged partitioned counter values are written to the memory partition 640. In some embodiments, the values may be sent to packet processor 150.
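  • As a consolidated illustration of the operations of flow diagram 900, the following C sketch strings together partitioning (910-920), the dual add (940-960), merging (970), and the write-back (990); the ECC step (980) is reduced to a placeholder, and all names, widths, and the simple parity used in place of a real error correcting code are assumptions for this example only.

    #include <stdint.h>

    /* Programmed partition size for one entry (920): byte partition
     * width in bits; assumed to satisfy 1 <= byte_bits <= 63. */
    typedef struct { unsigned byte_bits; } partition_cfg;

    /* Placeholder for the 8-bit ECC computed by module 650 (980); a
     * real implementation would use, e.g., a SECDED code rather than
     * the byte-wise XOR shown here. */
    static uint8_t ecc8_placeholder(uint64_t word)
    {
        uint8_t ecc = 0;
        for (int i = 0; i < 8; i++)
            ecc ^= (uint8_t)(word >> (8 * i));  /* XOR of the 8 bytes */
        return ecc;
    }

    /* 930-990: receive a packet for a given counter word, perform the
     * dual add on the partitioned counter, merge the partitions back
     * into one 64-bit value, append ECC, and write it back.          */
    static void process_packet(uint64_t *memory_word, uint8_t *memory_ecc,
                               const partition_cfg *cfg, uint64_t packet_size)
    {
        unsigned bbits = cfg->byte_bits;
        uint64_t bmask = (1ULL << bbits) - 1;

        uint64_t bytes   = (*memory_word & bmask) + packet_size;  /* 950 */
        uint64_t packets = (*memory_word >> bbits) + 1;           /* 960 */

        uint64_t merged  = (packets << bbits) | (bytes & bmask);  /* 970 */

        *memory_ecc  = ecc8_placeholder(merged);                  /* 980 */
        *memory_word = merged;                                    /* 990 */
    }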
  • Example Computer System Environment
  • With reference now to FIG. 10, all or portions of some embodiments described herein are composed of computer-readable and computer-executable instructions that reside, for example, in computer-usable/computer-readable storage media of a computer system. That is, FIG. 10 illustrates one example of a type of computer (computer system 1000) that can be used in accordance with or to implement various embodiments which are discussed herein. It is appreciated that computer system 1000 of FIG. 10 is an example and that embodiments as described herein can operate on or within a number of different computer systems including, but not limited to, general purpose networked computer systems, embedded computer systems, routers, switches, server devices, client devices, various intermediate devices/nodes, stand alone computer systems, media centers, handheld computer systems, multi-media devices, and the like. Computer system 1000 of FIG. 10 is well adapted to having peripheral tangible computer-readable storage media 1002 such as, for example, a floppy disk, a compact disc, digital versatile disc, other disc based storage, universal serial bus “thumb” drive, removable memory card, and the like coupled thereto. The tangible computer-readable storage media is non-transitory in nature.
  • System 1000 of FIG. 10 includes an address/data bus 1004 for communicating information, and a processor 1006A coupled with bus 1004 for processing information and instructions. As depicted in FIG. 10, system 1000 is also well suited to a multi-processor environment in which a plurality of processors 1006A, 1006B, and 1006C are present. Conversely, system 1000 is also well suited to having a single processor such as, for example, processor 1006A. Processors 1006A, 1006B, and 1006C may be any of various types of microprocessors. System 1000 also includes data storage features such as a computer usable volatile memory 1008, e.g., random access memory (RAM), coupled with bus 1004 for storing information and instructions for processors 1006A, 1006B, and 1006C. System 1000 also includes computer usable non-volatile memory 1010, e.g., read only memory (ROM), coupled with bus 1004 for storing static information and instructions for processors 1006A, 1006B, and 1006C. Also present in system 1000 is a data storage unit 1012 (e.g., a magnetic or optical disk and disk drive) coupled with bus 1004 for storing information and instructions. System 1000 may also include an alphanumeric input device 1014 including alphanumeric and function keys coupled with bus 1004 for communicating information and command selections to processor 1006A or processors 1006A, 1006B, and 1006C. System 1000 may also include cursor control device 1016 coupled with bus 1004 for communicating user input information and command selections to processor 1006A or processors 1006A, 1006B, and 1006C. In one embodiment, system 1000 may also include display device 1018 coupled with bus 1004 for displaying information.
  • Referring still to FIG. 10, display device 1018 of FIG. 10, when included, may be a liquid crystal device, cathode ray tube, plasma display device or other display device suitable for creating graphic images and alphanumeric characters recognizable to a user. Cursor control device 1016, when included, allows the computer user to dynamically signal the movement of a visible symbol (cursor) on a display screen of display device 1018 and indicate user selections of selectable items displayed on display device 1018. Many implementations of cursor control device 1016 are known in the art including a trackball, mouse, touch pad, joystick or special keys on alphanumeric input device 1014 capable of signaling movement of a given direction or manner of displacement. Alternatively, it will be appreciated that a cursor can be directed and/or activated via input from alphanumeric input device 1014 using special keys and key sequence commands. System 1000 is also well suited to having a cursor directed by other means such as, for example, voice commands. System 1000 also includes an I/O device 1020 for coupling system 1000 with external entities. For example, in one embodiment, I/O device 1020 is a modem for enabling wired or wireless communications between system 1000 and an external network such as, but not limited to, the Internet. In another example I/O device 1020 uses SerDes technology.
  • Referring still to FIG. 10, various other components are depicted for system 1000. Specifically, when present, an operating system 1022, applications 1024, modules 1026, and data 1028 are shown as typically residing in one or some combination of computer usable volatile memory 1008 (e.g., RAM), computer usable non-volatile memory 1010 (e.g., ROM), and data storage unit 1012. In some embodiments, all or portions of various embodiments described herein are stored, for example, as an application 1024 and/or module 1026 in memory locations within RAM 1008, computer-readable storage media within data storage unit 1012, peripheral computer-readable storage media 1002, and/or other tangible computer-readable storage media.

Claims (25)

1. An integrated circuit device for receiving packets comprising:
a first counter for counting a number of said packets; and
a second counter for counting bytes of said packets, wherein said first counter and said second counter are configured to be incremented by a single command from a packet processor.
2. The integrated circuit of claim 1, wherein said first counter comprises:
a lifetime counter.
3. The integrated circuit of claim 1, wherein said second counter comprises:
a lifetime counter.
4. The integrated circuit of claim 1, wherein said integrated circuit is a system-on-chip (SoC).
5. The integrated circuit of claim 1, further comprising:
a logic unit comprising said first counter and said second counter.
6. The integrated circuit of claim 5, wherein said logic unit further comprises:
an error correcting code module.
7. The integrated circuit of claim 1, wherein at least one of said first counter and said second counter is at least a 64-bit counter.
8. The integrated circuit of claim 1, wherein at least one of said first counter and said second counter is reset at a predetermined interval.
9. The integrated circuit of claim 1, further comprising:
an incrementer for incrementing said first counter.
10. The integrated circuit of claim 1, wherein said single command comprises:
incrementing said first counter by 1 and incrementing said second counter by packet size.
11. The integrated circuit of claim 1, further comprising:
a programmable partitionable counter comprising said first counter and said second counter.
12. The integrated circuit of claim 1, wherein said first counter and said second counter are fixed width.
13. A data traffic governing system comprising:
an integrated circuit device comprising:
a first counter for counting a number of said packets; and
a second counter for counting bytes of said packets, wherein said first counter and said second counter are configured to be incremented by a single command.
14. The data traffic governing system of claim 13, wherein said first counter comprises:
a lifetime counter.
15. The data traffic governing system of claim 13, wherein said second counter comprises:
a lifetime counter.
16. The data traffic governing system of claim 13, wherein said integrated circuit is an ASIC.
17. The data traffic governing system of claim 13, further comprising:
a logic unit comprising said first counter and said second counter.
18. The data traffic governing system of claim 17, wherein said logic unit further comprises:
an error correction code module.
19. The data traffic governing system of claim 13, wherein said first counter is a 64-bit counter and said second counter is a 64-bit counter.
20. The data traffic governing system of claim 13, wherein said first counter and said second counter are reset at a predetermined interval.
21. The data traffic governing system of claim 13, further comprising:
an incrementer for incrementing said first counter.
22. The data traffic governing system of claim 13, wherein said single command comprises:
incrementing said first counter by 1 and incrementing said second counter by packet size.
23. The data traffic governing system of claim 13, further comprising:
a data packet processor, wherein said single command is from said data packet processor.
24. The data traffic governing system of claim 13, further comprising:
a programmable partitionable counter comprising said first counter and said second counter.
25. The data traffic governing system of claim 13, wherein said first counter and said second counter are fixed width.
US13/911,999 2010-01-29 2013-06-06 Dual counter Abandoned US20130329555A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US13/912,033 US9667546B2 (en) 2012-06-06 2013-06-06 Programmable partitionable counter
US13/911,999 US20130329555A1 (en) 2012-06-06 2013-06-06 Dual counter
US14/503,382 US11221764B2 (en) 2010-01-29 2014-09-30 Partitioned memory with shared memory resources and configurable functions

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261656377P 2012-06-06 2012-06-06
US13/911,999 US20130329555A1 (en) 2012-06-06 2013-06-06 Dual counter

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/841,025 Continuation-In-Part US9496009B2 (en) 2010-01-29 2013-03-15 Memory with bank-conflict-resolution (BCR) module including cache

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/838,971 Continuation-In-Part US20130329553A1 (en) 2010-01-29 2013-03-15 Traffic metering and shaping for network packets

Publications (1)

Publication Number Publication Date
US20130329555A1 true US20130329555A1 (en) 2013-12-12

Family

ID=49715224

Family Applications (3)

Application Number Title Priority Date Filing Date
US13/838,971 Abandoned US20130329553A1 (en) 2010-01-29 2013-03-15 Traffic metering and shaping for network packets
US13/911,999 Abandoned US20130329555A1 (en) 2010-01-29 2013-06-06 Dual counter
US13/912,033 Active 2034-05-28 US9667546B2 (en) 2010-01-29 2013-06-06 Programmable partitionable counter

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/838,971 Abandoned US20130329553A1 (en) 2010-01-29 2013-03-15 Traffic metering and shaping for network packets

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/912,033 Active 2034-05-28 US9667546B2 (en) 2010-01-29 2013-06-06 Programmable partitionable counter

Country Status (1)

Country Link
US (3) US20130329553A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130332708A1 (en) * 2012-06-06 2013-12-12 Mosys Inc. Programmable partitionable counter

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7275826B2 (en) * 2019-05-10 2023-05-18 オムロン株式会社 counter unit

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5999474A (en) * 1998-10-01 1999-12-07 Monolithic System Tech Inc Method and apparatus for complete hiding of the refresh of a semiconductor memory
US6789116B1 (en) * 1999-06-30 2004-09-07 Hi/Fn, Inc. State processor for pattern matching in a network monitor device
JP3745930B2 (en) * 2000-02-23 2006-02-15 富士通株式会社 Packet insertion interval control device and packet insertion interval control method
US7538772B1 (en) * 2000-08-23 2009-05-26 Nintendo Co., Ltd. Graphics processing system with enhanced memory controller
US6901052B2 (en) * 2001-05-04 2005-05-31 Slt Logic Llc System and method for policing multiple data flows and multi-protocol data flows
TWI229276B (en) * 2003-07-23 2005-03-11 Tatung Co Protocol method of reusable hardware IP
US6931354B2 (en) * 2003-11-13 2005-08-16 International Business Machines Corporation Method, apparatus and computer program product for efficient, large counts of per thread performance events
US7385985B2 (en) * 2003-12-31 2008-06-10 Alcatel Lucent Parallel data link layer controllers in a network switching device
US20070266370A1 (en) * 2004-09-16 2007-11-15 Myers Glenford J Data Plane Technology Including Packet Processing for Network Processors
US7451338B2 (en) * 2005-09-30 2008-11-11 Intel Corporation Clock domain crossing
US7532700B2 (en) 2006-08-21 2009-05-12 International Business Machines Corporation Space and power efficient hybrid counters array
WO2009042089A1 (en) * 2007-09-26 2009-04-02 Wms Gaming Inc. Wagering game machines with non-volatile memory
US8074132B2 (en) * 2008-10-28 2011-12-06 Broadcom Corporation Protecting data on integrated circuit
US20100158023A1 (en) * 2008-12-23 2010-06-24 Suvhasis Mukhopadhyay System-On-a-Chip and Multi-Chip Systems Supporting Advanced Telecommunication Functions
US20100205293A1 (en) * 2009-02-09 2010-08-12 At&T Mobility Ii Llc Comprehensive policy framework for converged telecommunications networks
WO2011055168A1 (en) 2009-11-06 2011-05-12 Freescale Semiconductor, Inc. Area efficient counters array system and method for updating counters
US11221764B2 (en) * 2010-01-29 2022-01-11 Mosys, Inc. Partitioned memory with shared memory resources and configurable functions
US20130329553A1 (en) * 2012-06-06 2013-12-12 Mosys, Inc. Traffic metering and shaping for network packets
CN102812670B (en) * 2010-03-22 2015-11-25 飞思卡尔半导体公司 The method of token bucket management devices and management token bucket
US8769088B2 (en) * 2011-09-30 2014-07-01 International Business Machines Corporation Managing stability of a link coupling an adapter of a computing system to a port of a networking device for in-band data communications

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5003390A (en) * 1990-03-26 1991-03-26 Pbse Enterprises, Inc. Search and lock technique for reliable acquisition of data transmitted via television signals
US5410721A (en) * 1992-12-24 1995-04-25 Motorola, Inc. System and method for incrementing a program counter
US6310599B1 (en) * 1995-12-22 2001-10-30 Cirrus Logic, Inc. Method and apparatus for providing LCD panel protection in an LCD display controller
US6101591A (en) * 1998-03-25 2000-08-08 International Business Machines Corporation Method and system for selectively independently or simultaneously updating multiple system time clocks in an MPEG system
US6192466B1 (en) * 1999-01-21 2001-02-20 International Business Machines Corporation Pipeline control for high-frequency pipelined designs
US20020046324A1 (en) * 2000-06-10 2002-04-18 Barroso Luiz Andre Scalable architecture based on single-chip multiprocessing
US6799262B1 (en) * 2000-09-28 2004-09-28 International Business Machines Corporation Apparatus and method for creating instruction groups for explicity parallel architectures
US7539489B1 (en) * 2003-04-04 2009-05-26 Veriwave, Incorporated Location-based testing for wireless data communication networks
US6970426B1 (en) * 2003-05-14 2005-11-29 Extreme Networks Rate color marker
US20050240745A1 (en) * 2003-12-18 2005-10-27 Sundar Iyer High speed memory control and I/O processor system
US7698412B2 (en) * 2003-12-31 2010-04-13 Alcatel Lucent Parallel data link layer controllers in a network switching device
US20060101152A1 (en) * 2004-10-25 2006-05-11 Integrated Device Technology, Inc. Statistics engine
US7724814B2 (en) * 2006-08-15 2010-05-25 Texas Instruments Incorporated Methods and apparatus for decision feedback equalization with dithered updating
US20080174329A1 (en) * 2007-01-18 2008-07-24 Advanced Micro Devices, Inc. Method and device for determining an operational lifetime of an integrated circuit device
US8345696B2 (en) * 2008-08-25 2013-01-01 Fujitsu Limited Router and packet discarding method
US20130222109A1 (en) * 2012-02-23 2013-08-29 Infineon Technologies Ag System-Level Chip Identify Verification (Locking) Method with Authentication Chip

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130332708A1 (en) * 2012-06-06 2013-12-12 Mosys Inc. Programmable partitionable counter
US9667546B2 (en) * 2012-06-06 2017-05-30 Mosys, Inc. Programmable partitionable counter

Also Published As

Publication number Publication date
US9667546B2 (en) 2017-05-30
US20130332708A1 (en) 2013-12-12
US20130329553A1 (en) 2013-12-12

Similar Documents

Publication Publication Date Title
TWI668975B (en) Circuit and method for packet shaping in a network processor
TWI559706B (en) Packet scheduling in a network processor
TWI566551B (en) Packet output processing
US7715419B2 (en) Pipelined packet switching and queuing architecture
US6356962B1 (en) Network device and method of controlling flow of data arranged in frames in a data-based network
US20110032947A1 (en) Resource arbitration
US7957392B2 (en) Method and apparatus for high-performance bonding resequencing
US20150281126A1 (en) METHODS AND APPARATUS FOR A HIGH PERFORMANCE MESSAGING ENGINE INTEGRATED WITHIN A PCIe SWITCH
US20020013821A1 (en) Method and network device for creating buffer structures in shared memory
US7046625B1 (en) Method and system for routing network-based data using frame address notification
US7293158B2 (en) Systems and methods for implementing counters in a network processor with cost effective memory
CN112565115A (en) Transmission method and device of TCP data, computer equipment and storage medium
US7664028B1 (en) Apparatus and method for metering and marking data in a communication system
CN114205304A (en) Flow control method, device, equipment and storage medium based on double leaky buckets
US9667546B2 (en) Programmable partitionable counter
US20180136905A1 (en) First-in-first-out buffer
US10817177B1 (en) Multi-stage counters
US8745455B2 (en) Providing an on-die logic analyzer (ODLA) having reduced communications
US20060161647A1 (en) Method and apparatus providing measurement of packet latency in a processor
US20080033908A1 (en) Method and system for data processing in a shared database environment
EP3863248A1 (en) Protocol data unit end handling with fractional data alignment and arbitration fairness
CN117834570A (en) Data packet processing method and device of transmission system, electronic equipment and storage medium

Legal Events

Date Code Title Description
AS Assignment

Owner name: MOSYS, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PATEL, JAY;MILLER, MICHAEL;MORRISON, MICHAEL;SIGNING DATES FROM 20130731 TO 20130807;REEL/FRAME:036198/0277

AS Assignment

Owner name: INGALLS & SNYDER LLC, NEW YORK

Free format text: SECURITY INTEREST;ASSIGNOR:MOSYS, INC.;REEL/FRAME:038081/0262

Effective date: 20160314

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: INGALLS & SNYDER LLC, NEW YORK

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:PERASO INC. F/K/A MOSYS, INC.;REEL/FRAME:061593/0094

Effective date: 20221003