This article has been adapted from TechNet; refer to the original link.
Though System Center Configuration Manager 2007 has been out for some time now (currently at R3 SP2), and System Center Configuration Manager 2012 is due out soon, occasional questions still linger, such as "How do clients communicate with the site servers?" and "How big are the packets being sent?", related to the issue of network latency. This post attempts to address this issue for companies where network latency is a factor in systems management. The information was gathered from various internal and external Microsoft sources (i.e. the Product Team and Premier Field Engineers), and the numbers presented here will likely be similar to what you might see in ConfigMgr 2012. The numbers and figures here are provided as-is, and are therefore subject to revision.
The following diagram depicts ConfigMgr server-to-server (i.e. site-to-site) and server-to-client communications, focused mainly on what happens in a Secondary Site. In an ideal design, server-to-client communications occur only intra-site (over the LAN) and do not traverse inter-site links (over the WAN). This allows for the scenarios where:
- Inter-site communications should only occur server-to-server, over the WAN. Clients at a Secondary Site never directly communicate with a Primary Site.
- Intra-site communications occur only over the LAN.
- The Primary Site Distribution Point sends data over the WAN or LAN to a Secondary Site Distribution Point, via throttled SMB.
- Clients at a Secondary Site get installation media from the local Secondary Site Distribution Point using BITS, over the LAN.
- Clients at a Secondary Site send inventory to, and get policy from, the Secondary Site Proxy Management Point, over the LAN.
- The Secondary Site Proxy Management Point sends consolidated data over the WAN/LAN to the Primary Site Management Point, via throttled SMB.
However, the following caveats
should be taken into account:
- There’s nothing in ConfigMgr to prevent multiple
ConfigMgr sites on a LAN network.
- There’s nothing in ConfigMgr to prevent ConfigMgr
Clients, physically local to a site, from being located across a WAN link
if the boundaries are configured this way (not the best design).
- The diagram assumes that you have Proxy MPs and DPs
roles on the Secondary Site Servers. Clients at a Secondary Site
communicate with their Primary Site if there is no Proxy MP or DP on the
Secondary Site, and those roles exist on the Primary Site.
- By default, Clients get their initial installation bits from an MP. Frequently, this is over a WAN connection.
- Clients at a Secondary Site send inventory to, and get policy from, the Secondary Site Proxy Management Point, over the LAN/WAN, if the Proxy MP exists.
- BDPs should be connected to sites with the least number of network hops to the DP, and so they may not necessarily be connected to a Secondary Site.
Additionally, the following points
should be noted for the diagram:
- Server-to-server traffic is SMB & PPTP; traffic to a DP is SMB & RPC.
- BITS should run intra-site (LAN) only, not inter-site (WAN).
- BITS does not kick in until the content being transferred is larger than 512 KB.
- BITS can be throttled for computers via Group Policy.
- BITS can be throttled for Branch DPs via ConfigMgr.
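The 512 KB cutoff in the bullets above can be sketched as a simple decision rule. This is an illustrative Python sketch, not ConfigMgr code; the helper name and constant are assumptions based on the note above.

```python
# Hypothetical sketch of the rule above: BITS only engages for content
# larger than 512 KB; smaller transfers go over a direct (SMB) copy.
BITS_THRESHOLD_BYTES = 512 * 1024  # 512 KB, per the bullet above

def choose_transfer(content_size_bytes: int) -> str:
    """Return 'BITS' for large content, 'SMB' (direct copy) otherwise."""
    if content_size_bytes > BITS_THRESHOLD_BYTES:
        return "BITS"
    return "SMB"

print(choose_transfer(300 * 1024))    # small package -> SMB
print(choose_transfer(4 * 1024**2))   # 4 MB package  -> BITS
```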
The diagram is only meant to
illustrate the ConfigMgr server-server and server-client communications. For
details of the protocols and ports used, see Ports Used by Configuration Manager.
Primary and Secondary Site Considerations
Primary Sites
- A primary site stores
Configuration Manager 2007 data for itself and all the sites beneath it in a
SQL Server database.
Secondary Sites
- The secondary site forwards the
information it gathers from Configuration Manager 2007 clients, such as
computer inventory data and Configuration Manager 2007 system status
information, to its parent site.
A proxy management point is a
management point installed at a secondary site. The proxy management point is used
by Configuration Manager Clients to retrieve policy at secondary sites
attached to their assigned primary site. The proxy management point receives
inventory data and status messages and sends them to the secondary site server
to be forwarded to the parent site. Using a proxy management point increases
bandwidth usage efficiency for clients at secondary sites.
Boundaries are used to assign
clients to a specific Configuration Manager 2007 site and should be unique to
each site.
…clients outside of the protected
boundaries will not be able to access the distribution point or state migration
point roles on that site system.
A Protected Distribution Point is not a role, but a way of configuring the boundaries of a Distribution Point. This ensures that client requests for installation media do not go over the WAN, and it also allows BITS transfers from the Distribution Point to the client, thereby throttling the traffic between the Distribution Points and the clients:
Server distribution point site
systems can be BITS-enabled to allow Configuration Manager 2007 clients to
throttle network usage during package downloads. To install a BITS-enabled
distribution point, IIS must first be installed on the site system computer and
BITS-enabled in IIS.
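The protected-DP behavior described above amounts to a boundary membership check: a client may use the DP only if it falls inside one of the DP's configured boundaries. The sketch below models this with IP subnet boundaries; the function and the sample subnets are invented for illustration and are not part of ConfigMgr.

```python
# Illustrative model of a protected DP: a client is served only if its IP
# address falls within one of the DP's boundary subnets, so requests for
# installation media never cross the WAN.
import ipaddress

def client_may_use_dp(client_ip: str, boundary_subnets: list) -> bool:
    addr = ipaddress.ip_address(client_ip)
    return any(addr in ipaddress.ip_network(net) for net in boundary_subnets)

# DP protected by the branch-office subnet only (example values):
boundaries = ["10.20.0.0/16"]
print(client_may_use_dp("10.20.5.7", boundaries))   # local client -> True
print(client_may_use_dp("10.30.1.1", boundaries))   # WAN client   -> False
```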
Throttling Server-to-Server RPC
By default, the data transfer rate that Configuration Manager 2007 sites use to send data to an address (another site) is unlimited at all times. This means that when a site is transmitting data to another site, it could potentially consume 100% of the available bandwidth during the data transfer. However, you can limit how much connection bandwidth is used at different times of the day by setting a maximum data transfer rate for the address.
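The time-of-day limit described above can be modeled as a schedule lookup. This is a hedged sketch; the schedule values are invented for illustration, and real limits are configured on the site address in the console.

```python
# Sketch of a per-address rate-limit schedule: the allowed transfer rate
# depends on the hour of day. Entries are (start_hour, end_hour, limit);
# a limit of None means unlimited. All values here are examples.
SCHEDULE = [
    (8, 18, 40),    # business hours: cap at 40% of bandwidth
    (18, 8, None),  # nights: unlimited
]

def rate_limit_percent(hour: int):
    """Return the bandwidth cap (percent) in effect at the given hour."""
    for start, end, limit in SCHEDULE:
        if start <= end:
            if start <= hour < end:
                return limit
        else:  # window wraps past midnight
            if hour >= start or hour < end:
                return limit
    return None  # no window matched: unlimited

print(rate_limit_percent(10))  # 40 (daytime cap)
print(rate_limit_percent(22))  # None (unlimited overnight)
```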
Throttling Server-to-Client BITS
Generally, it is not recommended to
throttle BITS. As described in About BITS:
BITS uses idle network bandwidth to
transfer the files and will increase or decrease the rate at which files are
transferred based on the amount of idle network bandwidth available.
Because BITS already “provides one
foreground and three background priority levels that you use to prioritize
transfer jobs,” throttling BITS is almost like double throttling. Basically,
throttling BITS means that BITS’ algorithm is being limited from taking
advantage of the idle network bandwidth.
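The priority model quoted above (one foreground and three background levels) can be sketched as a simple scheduling rule: the highest-priority pending job is serviced first. The job list and scheduler below are hypothetical illustrations, not the BITS API.

```python
# Sketch of BITS-style job priorities: one foreground level and three
# background levels; lower number = serviced first. Job names are examples.
FOREGROUND, HIGH, NORMAL, LOW = 0, 1, 2, 3

jobs = [
    ("os-image", LOW),
    ("emergency-update", FOREGROUND),
    ("app-package", NORMAL),
]

def next_job(job_list):
    """Pick the pending job with the highest (numerically lowest) priority."""
    return min(job_list, key=lambda j: j[1])

print(next_job(jobs)[0])  # emergency-update
```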
Drawbacks of throttling BITS include clients taking a long time to install packages, and a limited ability to push out "emergency" packages, e.g. Windows Updates. The critical piece is to TEST the requirement in a pilot first, to ensure that the right combination of settings makes the network team happy.
To throttle BITS, Group Policy settings can be used for Windows 7, as shown below, from Bits.admx in WindowsServer2008R2andWindows7GroupPolicySettings.xlsx. Deprecated Windows XP policies have been removed from the list below. Specifically, you can use the policy Set up a work schedule to limit the maximum network bandwidth used for BITS background transfers to limit the bandwidth. Using Allow BITS Peercaching to let client peers share BITS data is discouraged; use a Branch Distribution Point role instead. ConfigMgr 2012 will have the ability to throttle BITS per ConfigMgr Collection, if required.
Policy Setting Name
- Allow BITS Peercaching
- Do not allow the BITS client to use Windows BranchCache
- Do not allow the computer to act as a BITS Peercaching client
- Do not allow the computer to act as a BITS Peercaching server
- Limit the age of files in the BITS Peercache
- Limit the BITS Peercache size
- Limit the maximum BITS job download time
- Limit the maximum network bandwidth used for Peercaching
- Limit the maximum number of BITS jobs for each user
- Limit the maximum number of BITS jobs for this computer
- Limit the maximum number of files allowed in a BITS job
- Limit the maximum number of ranges that can be added to the file in a BITS job
- Set up a maintenance schedule to limit the maximum network bandwidth used for BITS background transfers
- Set up a work schedule to limit the maximum network bandwidth used for BITS background transfers
Table 1: Bits.admx from WindowsServer2008R2andWindows7GroupPolicySettings.xlsx
BranchCache and Branch Distribution Points
There are several articles that reference BranchCache and Branch Distribution Points.
But what are the reasons to deploy one or the other? Or even both, or neither? To answer this, it helps to look at the dichotomy between BranchCache and Branch Distribution Points. Most importantly, it should be understood that BranchCache and the Branch Distribution Point (BDP) role are completely different.
The BDP is specific to ConfigMgr, while BranchCache is a technology built into Windows Server 2008 R2. The BDP is actually a function of the ConfigMgr client, and even a Windows XP machine can function as a BDP. With a BDP, you can also conduct On Demand Packaging and pre-stage content on the BDP, if you need to do so.
BranchCache is specific to Windows Server 2008 R2, and only Windows Vista (with BITS 4.0) and Windows 7 support BranchCache. All of the servers that participate in BranchCache need to be Windows Server 2008 R2, with BranchCache enabled.
If you have a server operating
system in a remote office, you are better off making the office a Secondary
Site and putting a Distribution Point (not BDP) there. If you have just
workstations, the typical practice is to use BDP on the best workstation, as
you are allowed more control of packages within the ConfigMgr console, and
after all, you are using ConfigMgr to manage your software distribution.
However, as referenced above in Configuring SCCM and BranchCache, you can use
ConfigMgr and BranchCache together if it is appropriate for your environment,
and as noted in this article:
BranchCache is just one more option available
to administrators to help deliver content to clients efficiently and reduce
load on the network.
A VSD of the diagram above can be
found here: