您要查找的是不是 (Did you mean):
- RAM Data-Side Local Memory Bus 数据侧本地存储器总线
- With SMP, all memory access is posted to the same shared memory bus. 通过SMP,所有的内存访问都传递到相同的共享内存总线。
- The external memory bus may be isolated from the MCU and released to the DMA controller. 外部存储器总线可以与微控制器隔离开来,释放给DMA控制器使用。
- As the core and the memory bus run faster and faster, it seems that more PWR/GND are needed to supply enough peak current. 内核与存储器总线的运行速度越来越快,似乎需要更多的PWR/GND来供应足够的峰值电流。
- Local memory is the memory that is on the same node as the CPU currently running the thread. 本地内存是指与当前正在运行线程的CPU位于同一节点上的内存。
- The ratio of the cost to access foreign memory over that for local memory is called the NUMA ratio. 访问外部内存的开销与访问本地内存的开销比率称为NUMA比率。
- Sets the handle to the local memory that will be used by a multiple-line edit control. 设置为多行编辑控件使用的本地内存的句柄。(A rough Win32 sketch of this call follows the list below.)
- The master clock can be asynchronous with the PCM data-side clock or the ADPCM data-side clock. Using an 8-deep async FIFO solves the synchronization and exchange of data between different clock domains. 主时钟与PCM数据端时钟或ADPCM数据端时钟可以是异步的,不同时钟域之间的数据同步和交换是通过一个深度为8的异步FIFO来实现的。
- NUMA alleviates these bottlenecks by limiting the number of CPUs on any one memory bus and connecting the various nodes by means of a high speed interconnection. NUMA通过限制任何一条内存总线上的CPU数量并依靠高速互连来连接各个节点,从而缓解了这些瓶颈状况。
- This works fine for a relatively small number of CPUs, but not when you have dozens, even hundreds, of CPUs competing for access to the shared memory bus. 这种方式非常适用于CPU数量相对较少的情况,但不适用于具有几十个甚至几百个CPU的情况,因为这些CPU会相互竞争对共享内存总线的访问。
- It is more efficient for a thread to access memory from a buffer page that is allocated on the local memory than to access it from foreign memory. 线程从分配到本地内存的缓冲区页访问内存比从外部内存进行访问效率更高。(A libnuma sketch of node-local allocation follows the list below.)
- Physical shared-memory bus, message-passing network, and replicated shared-memory network are the main interconnection technologies used in distributed real-time simulation. 分布式仿真系统可采用的联接方式主要有物理共享内存总线、消息传递网络和复制共享内存网络三种。
- For example, under memory pressure, the buffer pool will not make any effort to free up foreign memory pages before local memory pages. 例如,在内存压力下,缓冲池不会努力尝试在释放本地内存页前释放外部内存页。
- The CPU might therefore be unable to issue memory operations at peak speed, since it has to compete with the device for access to the memory bus. 因此,CPU可能无法以峰值速度发出内存操作,因为它必须与设备竞争对内存总线的访问。
- This conflict occurs because another user has recently modified the table you are working on, but the Table Designer retains the older version of the table in your local memory. 之所以发生这种冲突,是因为其他用户最近修改了您正在使用的表,而表设计器在本地内存中保留了表的更早版本。
- "Non-uniform memory access" refers to this difference between the speed at which processors can access local memory and the speed at which they can access distant memory. “非一致性内存访问”指处理器访问本地内存的速度与访问远程内存的速度之间的这种差异。
- Pour an appropriate amount of styling wax into the palm, rub it slightly, and then apply it to the hair on both sides. 将适量定型发蜡倒入掌心,稍加摩擦后抹在头发两侧的位置上。
- Now we will look at how well some of the common grid middleware solutions in use today individually address the computation and data sides of this problem. 现在我们首先来了解一下目前使用的一些通用网格中间件解决方案单独解决这个问题的计算端和数据端的效果如何。
- NUMA reduces some of the bus congestion of SMP by having the processors in a node communicate with one another and their local memories via separate, smaller buses. NUMA通过分开的较小总线让每个节点内的处理器互相通信并与本地存储器通信来减轻SMP的总线阻塞情况。
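
The NUMA-related entries above (local memory, the NUMA ratio, node-local buffer pages) can be made concrete with Linux's libnuma. The following is a minimal sketch, assuming a Linux system with libnuma installed (build with `gcc numa_demo.c -lnuma`); the 64 MiB buffer size and the choice of `numa_alloc_local`/`numa_alloc_onnode` are illustrative assumptions, not taken from the quoted sources.

```c
/* Minimal sketch: allocate one buffer on the current thread's local NUMA node
 * and one forced onto another node. Assumes Linux + libnuma (-lnuma).
 * On a single-node machine both allocations land on node 0. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sched.h>
#include <numa.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    int cpu = sched_getcpu();                /* CPU the thread is running on */
    int local_node = numa_node_of_cpu(cpu);  /* node that owns this CPU      */
    int max_node = numa_max_node();
    int remote_node = (local_node + 1) % (max_node + 1);  /* some other node */

    size_t size = 64UL * 1024 * 1024;        /* 64 MiB, illustrative size    */

    /* Local allocation: pages come from the node the thread runs on. */
    void *local_buf = numa_alloc_local(size);
    /* Foreign allocation: pages placed on a specific (possibly remote) node. */
    void *remote_buf = numa_alloc_onnode(size, remote_node);

    if (!local_buf || !remote_buf) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }

    /* Touch the pages so they are actually committed on the requested nodes. */
    memset(local_buf, 0, size);
    memset(remote_buf, 0, size);

    printf("thread on CPU %d, local node %d, remote node %d\n",
           cpu, local_node, remote_node);

    numa_free(local_buf, size);
    numa_free(remote_buf, size);
    return 0;
}
```

Timing repeated reads of `local_buf` versus `remote_buf` would give a rough estimate of the NUMA ratio mentioned in the entries above.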
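
The entry about setting the local-memory handle of a multiline edit control refers to the Win32 EM_SETHANDLE message. The sketch below is a rough illustration, assuming a Win32 build and an existing edit control created with ES_MULTILINE; the helper name `set_edit_text_buffer` and the handling of the previous handle are illustrative assumptions, not a definitive recipe.

```c
/* Rough Win32 sketch: hand a multiline edit control a new local-memory
 * buffer via EM_SETHANDLE. hEdit is assumed to be an existing edit control
 * created with ES_MULTILINE (in a dialog, DS_LOCALEDIT may also be needed). */
#include <windows.h>

void set_edit_text_buffer(HWND hEdit, const char *text)
{
    size_t len = lstrlenA(text) + 1;

    /* The edit control expects a movable local-memory handle. */
    HLOCAL hNew = LocalAlloc(LMEM_MOVEABLE | LMEM_ZEROINIT, len);
    if (hNew == NULL)
        return;

    char *buf = (char *)LocalLock(hNew);
    if (buf != NULL) {
        lstrcpyA(buf, text);
        LocalUnlock(hNew);
    }

    /* The control does not free its old buffer; fetch it first so it can
     * be released after the new handle is installed. */
    HLOCAL hOld = (HLOCAL)SendMessageA(hEdit, EM_GETHANDLE, 0, 0);

    /* Hand the new local-memory buffer to the edit control. */
    SendMessageA(hEdit, EM_SETHANDLE, (WPARAM)hNew, 0);

    if (hOld != NULL)
        LocalFree(hOld);
}
```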
