NUMA
- 非一致性内存访问(non-uniform memory access);非一致内存访问
例句
The main thing done in the relational database was to use "soft NUMA" and port mapping to get a good distribution of work within the system.
在关系型数据库中所做的主要工作,是使用"软件NUMA"(soft NUMA)和端口映射,在系统内部实现良好的工作分布。
All traffic enters through a single port and is distributed on a round-robin basis to any available NUMA node.
所有通信流量都通过一个单独的端口输入并分布到任何可用的NUMA节点。
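The round-robin distribution described in the sentence above can be sketched in a few lines; the node names and connection IDs here are hypothetical, purely for illustration:

```python
from itertools import cycle

# Hypothetical NUMA node list; real node counts come from the hardware.
numa_nodes = ["node0", "node1", "node2", "node3"]

def round_robin_assign(connections, nodes):
    """Hand each incoming connection to the next NUMA node in turn."""
    node_cycle = cycle(nodes)
    return {conn: next(node_cycle) for conn in connections}

assignments = round_robin_assign(["c1", "c2", "c3", "c4", "c5"], numa_nodes)
# "c5" wraps back to "node0" once every node has received one connection.
```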
To understand how pages of memory from the buffer cache are assigned when using NUMA, see Growing and Shrinking the Buffer Pool Under NUMA.
若要了解使用NUMA时如何分配缓冲区高速缓存中的内存页,请参阅使用NUMA扩展和收缩缓冲池。
Systems with a large number of processors may find it advantageous to recompile against the NUMA user-land APIs added in RHEL4.
在拥有大量处理器的系统中,借助RHEL4中新增的NUMA用户空间API进行重新编译可能会有好处。
NUMA, like SMP, allows users to harness the combined power of multiple processors, with each processor accessing a common memory pool.
NUMA与SMP类似,让用户能够利用多个处理器组合起来的能力,每个处理器都访问一个公共内存池。
NUMA reduces the contention for a system's shared memory bus by having more memory buses and fewer processors on each bus.
NUMA通过使用更多的内存总线并减少每条总线上的处理器数量,来减少对系统共享内存总线的争用。
Any operation running on a single NUMA node can only use buffer pages from that node .
针对单个NUMA节点执行的任何操作都只能使用该节点中的缓冲区页。
The ratio of the cost to access foreign memory over that for local memory is called the NUMA ratio.
访问外部内存的开销与访问本地内存的开销比率称为NUMA比率。
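The NUMA ratio defined above is a simple quotient of access costs. A minimal sketch, using illustrative latencies rather than measurements from any real machine:

```python
def numa_ratio(foreign_ns, local_ns):
    """Ratio of foreign (remote-node) memory access cost to local access cost."""
    return foreign_ns / local_ns

# Illustrative numbers: 180 ns remote vs. 100 ns local gives a NUMA ratio of 1.8.
ratio = numa_ratio(foreign_ns=180.0, local_ns=100.0)
```

A ratio of 1.0 would mean remote access costs the same as local access, i.e. the machine behaves like uniform-memory SMP.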
For high-end machines, new features target performance improvements, scalability, throughput, and NUMA support for SMP machines.
对高端机器来说,新特性针对的是性能改进、可伸缩性、吞吐率,以及对SMP机器的NUMA支持。
The number of CPUs within a NUMA node depends on the hardware vendor.
NUMA节点中的CPU数量取决于硬件供应商。
This provides automatic load balancing among the NUMA nodes .
它提供了NUMA节点间的自动负载平衡。
On a mail-server benchmark, we show a 39% improvement in performance by automatically splitting the application among multiple NUMA domains.
在一个邮件服务器基准测试中,通过自动将应用程序划分到多个NUMA域,我们获得了39%的性能提升。
Within a NUMA node, the connection is run on the least loaded scheduler on that node.
在NUMA节点内,连接按照该节点上负载最小的计划程序运行。
The NUMA architecture was designed to surpass the scalability limits of the SMP architecture .
NUMA体系结构旨在突破SMP体系结构的可伸缩性限制。
Not just for SMP or NUMA, but for everything from a single-node UP system to a massively clustered system.
不仅适用于SMP或NUMA,而且适用于从单节点单处理器(UP)系统到大规模集群系统的一切场景。
In NUMA systems, each processor is close to some parts of memory and further from others.
在NUMA系统中,每个处理器距某部分内存较近而距其他内存较远。
In a NUMA architected system, CPUs are arranged in smaller sub-systems called pods.
在NUMA架构的系统中,CPU排列在叫做pods的较小的子系统中。
The NUMA architecture can increase processor speed without increasing the load on the processor bus.
NUMA体系结构可以在不增加处理器总线负载的情况下提高处理器速度。
This topic describes how pages of memory from the buffer pool are assigned when using non-uniform memory access (NUMA).
本主题介绍,在使用非一致性内存访问(NUMA)时,如何分配缓冲池中的内存页。
I designed and implemented a fault-containment method and a fault-recovery algorithm, effectively solving the fault-handling problem in CC-NUMA computers.
设计并实现了故障限制方法和故障恢复算法,有效地解决了CC-NUMA计算机的故障处理问题。
NUMA architecture provides a scalable solution to this problem.
NUMA体系结构为此问题提供了可扩展的解决方案。
Because NUMA uses local and foreign memory, it will take longer to access some regions of memory than others.
由于NUMA同时使用本地内存和外部内存,因此,访问某些内存区域的时间会比访问其他内存区域的要长。
All NUMA topics have been reorganized for this release.
已重新组织了此版本中的所有NUMA主题。
Applications seeking additional performance gains can use user-land NUMA APIs.
寻求进一步提升性能的应用程序可以使用用户空间NUMA API。
Similarly, buffer pool pages are distributed across hardware NUMA nodes.
同样,缓冲池页将跨硬件NUMA节点进行分布。
On NUMA hardware, some regions of memory are on physically different buses from other regions.
在NUMA硬件上,有些内存区域与其他区域位于不同的物理总线上。
When using NUMA, the max server memory and min server memory values are divided evenly among NUMA nodes.
使用NUMA时,会在NUMA节点之间平均划分max server memory和min server memory的值。
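The even division described above is straightforward integer arithmetic; the memory sizes and node count below are hypothetical configuration values, not defaults of any product:

```python
def per_node_share(total_mb, num_nodes):
    """Even split of a configured memory limit across NUMA nodes."""
    return total_mb // num_nodes

# Hypothetical settings: 32768 MB max / 4096 MB min server memory, 4 NUMA nodes.
max_per_node = per_node_share(32768, 4)  # each node's share of the upper limit
min_per_node = per_node_share(4096, 4)   # each node's share of the lower limit
```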
That means when users run out of capacity on their SMP servers, they can move their applications to NUMA servers with relative ease.
这意味着,当用户用尽SMP服务器的能力时,他们能较容易地将其应用程序移到NUMA服务器上。
NUMA hardware is provided by the computer manufacturer.
NUMA硬件由计算机制造商提供。
Affinitizing connections to specific processors when using Non-Uniform Memory Access (NUMA).
使用非一致内存访问(NUMA)时,将连接与特定处理器关联。
More than one port can be mapped to the same NUMA nodes.
可以将多个端口映射到同一NUMA节点。
You cannot create a soft-NUMA that includes CPUs from different hardware NUMA nodes.
无法创建包含来自不同硬件NUMA节点的CPU的软件NUMA。
Enabling memory location optimizations for NUMA multi-CPU systems (-XX: +UseNUMA).
为NUMA多CPU系统启用内存位置优化(-XX:+UseNUMA)。
Soft-NUMA does not provide memory to CPU affinity.
软件NUMA不提供内存与CPU的关联。
Number of pages that come from a different NUMA node.
来自其他NUMA节点的页数。
There is an instance of the Buffer Node object for each NUMA node in use.
对于正在使用的每个NUMA节点,都有一个Buffer Node对象实例。
It allows you to monitor the SQL Server buffer pool page distribution for each non-uniform memory access (NUMA) node.
通过它,您可以监视每个非一致性内存访问(NUMA)节点的SQL Server缓冲池页分布。
The O(1) scheduler also allows for load-balancing across CPUs and NUMA-aware load-balancing.
O(1)调度程序还允许跨CPU的负载平衡以及可感知NUMA(NUMA-aware)的负载平衡。