How to generate large size lookup engine #32
Hi,

The direct lookup engine provided by SDNet 2018.1 is limited to a depth of 64k entries. I need an engine with a larger capacity that can hold around one million entries.

Any pointers?

Regards,
Ganesh

Comments
Hi,

a table (or any lookup) is realized with on-chip memory (block RAM). If you want 1 million entries of only a single IPv4 address each (4 bytes) and you have no overhead, the total memory requirement will be 4 MB.

The NetFPGA-SUME is based on a Virtex-7 FPGA whose block RAM cells hold 4 KB each. You would therefore need a great many of them, and that is not feasible for the synthesis tool, since they must all be accessible in a single clock cycle.

In total, the FPGA has around 53 Mbit = 6.6 MB of on-chip memory:
https://www.xilinx.com/products/silicon-devices/fpga/virtex-7.html#productTable (VX690T)

A workaround might be: write your own lookup table based on external DRAM in Verilog/VHDL and integrate it as a P4 external function. However, this will limit the bandwidth (lookups/s), as the external memory has high latencies and low bandwidth (at least for table lookups).
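To put numbers on the sizing argument, here is a quick back-of-the-envelope check (a sketch only: the entry count and 4-byte key come from the comment above, the ~53 Mbit BRAM figure for the XC7VX690T from the linked Virtex-7 product table, and all per-entry overhead is ignored):

```python
# Back-of-the-envelope sizing check; not part of SDNet or the NetFPGA tooling.
entries = 1_000_000
key_bytes = 4                        # one IPv4 address per entry, no overhead

required_mb = entries * key_bytes / 1e6          # table size in MB
required_mbit = required_mb * 8                  # same size in Mbit

onchip_mbit = 53                                 # approx. XC7VX690T block RAM
onchip_mb = onchip_mbit / 8

print(f"table needs ~{required_mb:.1f} MB ({required_mbit:.0f} Mbit)")
print(f"on-chip BRAM:  ~{onchip_mb:.1f} MB ({onchip_mbit} Mbit)")
# -> ~4.0 MB needed vs ~6.6 MB available: it fits on paper, but the table would
#    be spread over very many small BRAM blocks that all have to be reachable
#    within a single lookup cycle, which is what makes synthesis impractical.
```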
Hi Ralf,

Thanks for your reply. I was hoping to use the 72 Mb QDR II SRAM listed in the NetFPGA specs. Is that usable for lookups?

Regards,
Ganesh
Yes, in theory you can use any memory. However, I think you will have to implement something on your own: as far as I know (I am not certain), SDNet does not support HLS for external QDR II memory.

Regards,
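For completeness, a rough capacity check for the 72 Mb part under the same zero-overhead assumptions as the sketch above (it says nothing about whether SDNet or HLS can actually drive that memory):

```python
# Capacity-only check for a 72 Mbit QDR II device; a sketch that ignores
# parity/ECC bits, controller overhead, and any value/action data per entry.
qdr_mbit = 72
entries = 1_000_000
key_bytes = 4                        # IPv4 key only, as in the sketch above

table_mbit = entries * key_bytes * 8 / 1e6
print(f"table needs ~{table_mbit:.0f} Mbit of {qdr_mbit} Mbit available")
# -> ~32 Mbit of 72 Mbit: capacity is not the bottleneck; the open questions
#    are SDNet support and the achievable lookups/s over the external interface.
```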