NGRF-NET-001 — Residential Campus Network Infrastructure Technical Specification
DOCUMENT NO. NGRF-NET-001 · REVISION 1.0 — BASELINE · ISSUED April 10, 2026 · STATUS: APPROVED FOR DESIGN
PREPARED BY F.G.D. Robledo · ORGANIZATION NewGen / NGRF · PAGES ~85 (equivalent) · LANG. EN-PH
Technical Specification  ·  Network Infrastructure  ·  IEEE 802.11be / Wi-Fi 7

Residential Campus Fiber Backbone and Wireless Infrastructure

Full Design, Component Specification, Rationale, and Performance Analysis for a Campus-Grade Enterprise Network Deployed in a Multi-Wing Residential Compound with Underground Facility

Abstract. This document constitutes the complete, authoritative technical specification for the residential campus network of F.G.D. Robledo, designated NGRF-NET-001. The architecture herein describes a multi-ISP aggregated WAN backbone terminating into a dual-edge high-availability routing layer, a dual central core switching fabric operating at four hundred and eighty gigabits per second per unit, a distributed wing core topology with per-wing Cisco Catalyst switching, Ubiquiti Enterprise PoE++ access switching, and a dense wireless fabric composed exclusively of IEEE 802.11be (Wi-Fi 7) quad-band access points. The aggregate WAN capacity of this design is 12 Gbps. The wireless fabric achieves a theoretical per-AP aggregate throughput of ≥19.4 Gbps via Multi-Link Operation across four discrete radio bands. All components are specified, rationalized, and benchmarked herein with full technical and engineering justification. This specification shall serve as the single source of truth for procurement, deployment, commissioning, and future expansion of the described network.
Author / Principal Engineer
Technical Reviewer
Date of Approval
§ REV
Revision History
Document Change Control Record
TABLE REV-01 Document Revision Log
Rev. | Date | Author | Description of Change | Status
0.1 | 2026-03-01 | F.G.D. Robledo | Initial draft — topology and ISP layer defined | DRAFT
0.5 | 2026-03-18 | F.G.D. Robledo | Wireless section expanded; Wi-Fi 7 MLO parameters added; VLAN schema first draft | DRAFT
0.9 | 2026-04-02 | F.G.D. Robledo | Power budget, PoE calculations, cabling spec, QoS framework added | REVIEW
1.0 | 2026-04-10 | F.G.D. Robledo | Baseline release — all sections complete; approved for design phase | APPROVED
§ 1
Scope, Purpose, and Applicability
Document Governing Mandate

This Technical Specification, designated NGRF-NET-001, establishes the complete, normative design intent and engineering specification for the residential campus-grade network infrastructure of the principal residence of F.G.D. Robledo. It encompasses, without limitation, all hardware components, logical configurations, physical cabling, wireless radio parameters, performance targets, redundancy requirements, security postures, and operational procedures necessary for the deployment, commissioning, and long-term operation of said network.

The scope of this document extends across the entirety of the described compound, including all wings of the primary above-ground residential structure, all subterranean tunnel corridors and underground facility levels, all outbuilding structures within the compound perimeter, and any external relay points required to extend network connectivity beyond the perimeter of the principal building.

This document is intended for use by qualified network engineers, IT infrastructure specialists, low-voltage cabling contractors, and authorized personnel of the principal. It is not intended for general public distribution. All specifications contained herein represent the design-phase intent and shall govern all procurement decisions. Deviations from this specification, whether arising from component unavailability, regulatory constraint, or engineering discovery during deployment, shall be documented via a formal Engineering Change Notice (ECN) and appended to this document at the next applicable revision cycle.

NOTE
Throughout this document, the term "compound" shall refer to the totality of the residential campus, inclusive of all wings, underground levels, and ancillary structures. The term "wing" shall refer to a discrete, physically separated section of the primary structure, each served by its own Wing Core distribution switch and independent access layer. The term "floor" shall refer to a single horizontal level within a given wing.
§ 2
Normative References and Standards
Governing Technical Frameworks

The design, components, and configurations described in this specification conform to or incorporate, in whole or in relevant part, the following standards, specifications, and regulatory frameworks. Where a listed standard has been superseded or amended, the most recent published revision as of the date of this document shall apply.

TABLE 2-01 Normative Standards Reference Matrix
Standard / Reference | Governing Body | Applicability | Relevance
IEEE 802.11be-2024 | IEEE | Mandatory | Wi-Fi 7 (EHT) PHY/MAC layer specification; all wireless APs in this design
IEEE 802.11ax-2021 | IEEE | Informative | Wi-Fi 6E backward compatibility reference; OFDMA precedent
IEEE 802.3bt-2018 | IEEE | Mandatory | PoE++ (802.3bt Type 3/4) — governs all PoE switch and AP power delivery
IEEE 802.1Q-2022 | IEEE | Mandatory | Virtual LAN (VLAN) tagging; all switching infrastructure
IEEE 802.1AX-2020 | IEEE | Mandatory | Link Aggregation Control Protocol (LACP); uplink bonding
IEEE 802.3ae-2002 | IEEE | Mandatory | 10 Gigabit Ethernet; core and uplink fiber runs
IEEE 802.1D / 802.1w | IEEE | Mandatory | Spanning Tree Protocol (STP) / Rapid STP; loop prevention in switching fabric
RFC 5798 (VRRP v3) | IETF | Mandatory | Virtual Router Redundancy Protocol v3; edge router HA
RFC 4271 (BGP-4) | IETF | Mandatory | Border Gateway Protocol; multi-ISP policy routing at edge
RFC 2328 (OSPFv2) | IETF | Mandatory | Open Shortest Path First; internal dynamic routing
RFC 4601 (PIM-SM) | IETF | Advisory | Protocol Independent Multicast; future multicast media distribution
ITU-T G.984 / G.987 | ITU-T | Informative | GPON / XG-PON; governs ISP ONT interface characteristics
TIA-568.2-D | TIA | Mandatory | Structured cabling specification; Cat 6A horizontal runs
TIA-568.3-D | TIA | Mandatory | Fiber optic cabling; OS2 single-mode backbone specification
WPA3-Enterprise (SAE) | Wi-Fi Alliance | Mandatory | Wireless authentication; all secured SSIDs
Wi-Fi 7 Certification | Wi-Fi Alliance | Mandatory | Device certification baseline for all APs and clients
NTC MC 05-08-2020 | NTC (Philippines) | Mandatory | National Telecommunications Commission frequency allocation; 6 GHz band local regulations
NFPA 70 / PEC 2017 | NFPA / PEC | Mandatory | Philippine Electrical Code; governs all electrical and PoE installation
All IEEE standards referenced above carry their most current published amendment as of April 2026.
§ 3
Definitions, Abbreviations, and Notation
Lexicon and Conventions

The following definitions, abbreviations, and notational conventions are established for use throughout this document. Unless otherwise specified contextually, these definitions shall apply in their entirety to all sections, tables, diagrams, and appendices.

ACL — Access Control List
A sequential list of permit and deny rules applied to network traffic at Layer 2 (MAC-based) or Layer 3 (IP-based) to control forwarding behavior and enforce security policy.
AP — Access Point
A wireless network device that creates one or more BSS/SSID service sets and provides IEEE 802.11 radio access to client stations. In this specification, all APs are IEEE 802.11be compliant and operate across four discrete frequency bands simultaneously.
BGP — Border Gateway Protocol
An exterior gateway protocol (EGP) operating at the edge routing layer to exchange reachability information between the compound network and multiple ISPs, enabling policy-based routing and path selection across three distinct upstream providers.
BSS — Basic Service Set
The fundamental building block of an 802.11 network, consisting of an AP and the set of client stations associated with it. Each SSID constitutes a logical BSS, and a single AP may host multiple BSSs concurrently.
EHT — Extremely High Throughput
The PHY designation for the IEEE 802.11be amendment, colloquially marketed as Wi-Fi 7. EHT extends maximum channel bandwidth to 320 MHz, maximum spatial streams to 16 per band, and maximum modulation to 4096-QAM.
EIRP — Effective Isotropic Radiated Power
The total RF power transmitted by an antenna system relative to an isotropic radiator, expressed in dBm. Regulatory limits govern maximum permissible EIRP per frequency band and per jurisdiction.
HA — High Availability
A design property of a system or subsystem in which redundant components and failover mechanisms ensure continuity of operation in the event of single or multiple component failures, typically expressed as a percentage uptime (e.g., 99.999% = "five nines").
LACP — Link Aggregation Control Protocol
As defined in IEEE 802.1AX, a protocol that allows multiple physical Ethernet links to be combined into a single logical link, providing both increased bandwidth and link-level redundancy. Used extensively at uplink interfaces throughout this design.
MLO — Multi-Link Operation
A defining feature of IEEE 802.11be that allows a single logical connection between an AP and a client to simultaneously utilize multiple frequency bands and/or channels. MLO provides simultaneous transmission and reception across bands, dramatically reducing latency and increasing aggregate throughput compared to prior band-steering mechanisms.
MU-MIMO — Multi-User Multiple-Input Multiple-Output
A radio access technology in which an AP communicates with multiple client stations simultaneously using spatial multiplexing. IEEE 802.11be extends MU-MIMO support to 16 spatial streams and 8 simultaneous downlink users per transmission opportunity.
ONT — Optical Network Terminal
The subscriber-premises equipment that terminates the optical fiber from the ISP's central office or OLT, converting the optical signal to an Ethernet interface for connection to the compound network. Each ISP in this design provides one ONT.
OSPF — Open Shortest Path First
A link-state interior gateway routing protocol, as specified in RFC 2328, operating within the compound's internal routing domain to propagate route information between the edge routers and core switching layer.
PHY — Physical Layer
The lowest layer of the OSI model, governing the physical transmission of data over a medium. In the context of wireless networking, the PHY defines modulation scheme, coding rate, channel bandwidth, number of spatial streams, and resultant data rate.
PoE++ — Power over Ethernet (802.3bt Type 3/4)
An extension to the IEEE 802.3 standard enabling power delivery of up to 90 W (Type 4) over standard Cat 5e or better copper cabling, simultaneously with data transmission. Employed at all access switching layers to power Wi-Fi 7 access points without the need for separate power supply units or cabling.
QoS — Quality of Service
A set of mechanisms and policies applied to network traffic to differentiate treatment based on traffic class, ensuring that latency-sensitive or high-priority traffic (e.g., VR streaming, gaming, VoIP) receives preferential forwarding, queuing, and bandwidth allocation relative to bulk or background traffic.
SFP+ — Small Form-factor Pluggable Plus
A hot-pluggable optical or copper transceiver module supporting 10 Gbps data rates per port, used extensively at core and distribution uplink interfaces throughout this design.
SSID — Service Set Identifier
The human-readable name of a wireless network, broadcast by an AP to identify a specific logical wireless network. Multiple SSIDs may be instantiated per physical AP, each mapped to a distinct VLAN, security policy, and QoS class.
TWT — Target Wake Time
An IEEE 802.11ax/be feature that allows an AP and a client to negotiate specific times at which the client will be awake and active on the channel, enabling dramatic reductions in client power consumption for IoT and battery-powered devices without impacting latency for active sessions.
VLAN — Virtual Local Area Network
A Layer 2 segmentation mechanism, as specified in IEEE 802.1Q, that logically partitions a physical switching fabric into multiple isolated broadcast domains, each identified by a 12-bit VLAN identifier (VID). Employed throughout this specification to separate traffic classes, enforce security boundaries, and simplify network management.
VRRP — Virtual Router Redundancy Protocol
As specified in RFC 5798, a protocol that allows two or more physical routers to collectively present a single virtual IP address to downstream clients, providing transparent gateway redundancy. One router operates as MASTER while the other(s) operate in BACKUP state, with automatic promotion upon MASTER failure.
§ 4
Physical Compound Description
Site Characterization and Coverage Zones

The subject compound is a multi-wing residential campus of significant scale, designed to accommodate a large number of permanent and concurrent users across multiple functional zones. The compound is organized into four primary above-ground wings (designated Wing A through Wing D), an underground tunnel corridor system interconnecting all wings, and one or more subterranean facility levels beneath the primary structure. The following characterization of each zone governs the wireless coverage planning, cabling infrastructure routing, and access switch placement described in subsequent sections.

TABLE 4-01 Compound Zone Characterization and Network Infrastructure Requirements
Zone Designation | Description | Est. Area (m²) | Est. Floors | Dominant Use | AP Density | Uplink Req.
Wing A | Primary residential wing; master suite, principal office, library, entertainment suites | ~800 | 3–4 | High-density residential, VR, 8K streaming | Very High (1 AP / ~40 m²) | 2×10G (LACP)
Wing B | Secondary residential wing; guest suites, common areas, dining | ~600 | 3 | Mixed residential, IoT, casual use | High (1 AP / ~55 m²) | 2×10G (LACP)
Wing C | Operational wing; server room, workshop, design studio, lab spaces | ~500 | 2–3 | High-bandwidth technical, server access, NAS | High (1 AP / ~50 m²) | 2×10G (LACP)
Wing D | Recreation wing; gymnasium, garage, VR arena, simulation bay | ~700 | 2 | VR-primary, ultra-low latency, high concurrent clients | Very High (1 AP / ~35 m²) | 2×10G (LACP)
Underground Corridors | Tunnel network interconnecting wings; also houses primary conduit runs | ~400 | 1 (below grade) | Transit, cabling, emergency comms | Medium (1 AP / ~80 m²) | 1×10G
Subterranean Facility | Below-grade facility; Shatterdome bay, mechanical, utility, storage | ~1200+ | 2 (below grade) | Operational, security monitoring | High (1 AP / ~60 m²) | 2×10G (LACP)
Exterior / Perimeter | Outdoor areas, circuit track perimeter, helipad, gardens | ~5000+ | N/A | Outdoor coverage, security cameras, perimeter comms | Low (sector / area APs) | 1×2.5G per AP
Area estimates are approximate and based on preliminary compound layout planning. Actual AP counts shall be determined following final architectural drawings and RF survey.
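The density targets and zone areas in Table 4-01 imply a first-order access point count per zone ahead of any RF survey. The following sketch derives those counts in Python; the area and per-AP coverage figures are the approximate planning values from the table, and the final counts remain survey-dependent.

```python
# First-order AP count estimate from the Table 4-01 zone areas and density targets.
# Figures are approximate planning values; the RF survey governs final counts.
import math

zones = {
    # zone: (approx. area in m^2, target coverage area per AP in m^2)
    "Wing A":                (800, 40),
    "Wing B":                (600, 55),
    "Wing C":                (500, 50),
    "Wing D":                (700, 35),
    "Underground Corridors": (400, 80),
    "Subterranean Facility": (1200, 60),
}

for zone, (area_m2, m2_per_ap) in zones.items():
    ap_count = math.ceil(area_m2 / m2_per_ap)
    print(f"{zone:24s} ~{area_m2:>5d} m² at 1 AP / ~{m2_per_ap} m²  ->  ≈{ap_count} APs")
```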
§ 5
Network Architecture Overview
Hierarchical Design Model and Topology
§ 5.1 Design Philosophy

The network architecture described in this specification is designed according to the classic three-layer hierarchical network model — Core, Distribution, and Access — adapted and extended for campus-scale residential deployment with dedicated WAN/edge routing and wireless tiers (see Section 5.2). These layers map cleanly to the physical component hierarchy and provide a principled framework for scalability, troubleshooting, and expansion.

Every layer of this network is designed to survive the failure of any single component without degradation of user-facing service.

The guiding design principles, in order of priority, are as follows:

1. High Availability (HA) Above All. No single point of failure shall be permitted at any layer of the network that would result in a complete loss of connectivity. Redundancy is implemented at the ISP level (three providers), the edge router level (VRRP with dual physical routers), the core switching level (dual chassis, cross-connected), and at every distribution and access uplink (dual-link LACP/MLAG fiber trunks). Even at the access layer, APs are dual-homed to separate switches where architecturally feasible.

2. Bandwidth Far in Excess of Current Demand. The network shall be provisioned for a bandwidth envelope substantially exceeding anticipated peak demand, providing headroom for future expansion, additional users, and as-yet-unanticipated high-bandwidth applications. A factor of not less than 3× overprovisioning at all aggregation points is the design target.

3. Wi-Fi 7 as the Universal Wireless Standard. No access point or wireless device below the IEEE 802.11be (Wi-Fi 7) specification shall be deployed as part of the primary wireless infrastructure. All APs shall support the quad-band radio configuration described in Section 11. Legacy device support shall be achieved via backward-compatible SSID configuration, not through the deployment of legacy hardware.

4. Enterprise Management and Observability. All network devices shall be centrally managed, with full telemetry streaming to a network management system (NMS). No device shall operate in a standalone, unmanaged configuration. This requirement applies equally to APs, switches, and routers.

§ 5.2 Hierarchical Layer Model
Layer 1 — WAN
ISP / ONT Layer
Three independent fiber ISP connections (PLDT 10G, Globe 1G, Converge 1G) terminate at separate ONTs. Aggregate WAN capacity: 12 Gbps. Provides upstream internet access, DNS, and NTP anchoring.
Layer 2 — Edge
Edge Routing — MikroTik CCR2004
Dual MikroTik CCR2004 routers in VRRP HA configuration. Handles NAT, BGP multi-ISP routing, stateful firewall, VLAN-aware routing, QoS marking, and VPN termination. 25G uplinks to core.
Layer 3 — Core
Central Core — Cisco Catalyst 9300X ×2
Dual 9300X in cross-connected topology. 480 Gbps per-chassis switching capacity. L2/L3 fabric spine. Distributes fiber uplinks to all wing cores. HSRP/VSS for gateway redundancy.
Layer 4 — Distribution
Wing Core — Cisco Catalyst 9300X-24Y + NM-4C per Wing
One C9300X-24Y + C9300X-NM-4C per wing. 2×100G QSFP28 MLAG to central cores (200G aggregate). 24 native 25G SFP28 downlinks to access switches (2×25G LACP per switch). Underground facility uses 2×25G MLAG uplink.
Layer 5 — Access
Floor Access — Ubiquiti Pro XG 24 PoE
One or more USW-Pro-XG-24-PoE per floor per wing. 16×10G + 8×2.5G PoE+++ RJ45 ports (100W/port). 2×25G SFP28 LACP uplinks to wing core (50G aggregate). Powers and connects all APs and wired endpoints on the floor.
Layer 6 — Wireless
Wi-Fi 7 APs — Asus ROG / ZenWiFi / ProArt
Dense deployment of Wi-Fi 7 quad-band APs (2.4 GHz + 5 GHz + 6 GHz Low + 6 GHz High). MLO-enabled, 320 MHz channels on 6 GHz, 4096-QAM, ≥19.4 Gbps theoretical aggregate per AP.
§ 5.3 Full Topology Diagram
[Figure content: six-layer topology — WAN/ISP layer (PLDT 10G XGS-PON, Globe 1G GPON, Converge 1G GPON ONTs; 12 Gbps aggregate), edge routing layer (EDGE-RTR-01/02, MikroTik CCR2004 in VRRP, VIP 192.168.1.1, BGP AS 65001), central core layer (CORE-SW-01/02, Cisco Catalyst 9300X, HSRP active/standby with inter-chassis cross-link), wing core / distribution layer (WING-A/B/C/D-CORE plus underground facility), access switching layer (per-floor PoE++ switches with LACP uplinks to the wing cores), and the IEEE 802.11be quad-band Wi-Fi 7 AP layer (MLO, 320 MHz, 4096-QAM) serving client endpoints on VLANs 10–60. Legend distinguishes primary fiber links, redundant cross-connects, inter-chassis links, PoE copper uplinks to APs, distribution fiber, and RF links.]
FIGURE 5-01 Full Network Topology — Six-Layer Hierarchical Architecture — NGRF-NET-001
§ 6
ISP and WAN Layer
Multi-Provider Upstream Connectivity

The WAN layer constitutes the topological entry point of the compound network into the global internet. Three geographically and administratively distinct Internet Service Providers are engaged simultaneously, each delivering independent fiber connectivity to the compound. This multi-ISP architecture is the foundational prerequisite for the high availability posture of the overall network and is not considered an optional enhancement; it is a mandatory structural requirement.

TABLE 6-01 ISP Subscription and ONT Specifications
Provider | Plan | Download | Upload | Technology | ONT Interface | Role | Monthly Cost (PHP)
PLDT | PLDT Fiber 10G | 10,000 Mbps | 10,000 Mbps | XGS-PON (ITU-T G.9807.1) | 10G SFP+ or 10GBase-T | PRIMARY | ~₱9,999–₱12,000
Globe | Globe At Home GFiber MAX 1G | 1,000 Mbps | 500 Mbps | GPON (ITU-T G.984) | 1GBase-T (RJ45) | SECONDARY | ~₱2,499–₱3,799
Converge | Converge FiberX 1G | 1,000 Mbps | 1,000 Mbps | GPON (ITU-T G.984) | 1GBase-T (RJ45) | TERTIARY | ~₱2,499–₱3,499
¹ Pricing is indicative as of Q1 2026 and subject to change. Actual contracted rates may vary. ² All three connections are provisioned as unlimited-data plans with no FUP cap.
§ 6.1 Rationale for Three-ISP Architecture

The deployment of three simultaneous ISP connections at this compound is motivated by a combination of uptime requirements, aggregate bandwidth demands, and the inherent unreliability of any single telecommunications provider operating in the Philippine market. The following analysis establishes the technical and economic justification.

Uptime and Availability. The compound network shall target a minimum service availability of 99.9% (three nines), corresponding to a maximum tolerated downtime of approximately 8.76 hours per calendar year. No single ISP in the Philippine residential market guarantees this level of availability on its own. Independent failure modes across three separate provider networks (separate physical infrastructure, separate central offices, separate backbone providers) reduce the probability of a simultaneous outage across all three providers to a negligibly small figure, yielding compound availability in excess of 99.99% even with imperfect individual provider reliability figures.
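As an illustration of the independence argument above, the compound WAN availability can be estimated as the complement of the probability that all three circuits are down simultaneously. The per-ISP availability figures in the sketch below are assumptions for the sake of the example, not contracted SLA values.

```python
# Compound WAN availability under an independent-failure assumption.
# Per-ISP availability figures are illustrative assumptions, not contracted SLAs.
isp_availability = {
    "PLDT 10G":    0.995,   # assumed
    "Globe 1G":    0.990,   # assumed
    "Converge 1G": 0.990,   # assumed
}

# Probability that every circuit is down at the same instant (independence assumed).
p_all_down = 1.0
for a in isp_availability.values():
    p_all_down *= (1.0 - a)

compound_availability = 1.0 - p_all_down
downtime_minutes_per_year = p_all_down * 365 * 24 * 60

print(f"Compound WAN availability : {compound_availability * 100:.5f} %")
print(f"Expected full-outage time : {downtime_minutes_per_year:.2f} minutes/year")
```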

Aggregate Throughput. The PLDT 10G plan alone provides 10 Gbps of WAN capacity, which represents an extraordinary amount of bandwidth for residential use. However, the addition of Globe and Converge provides 2 additional Gbps (for a total of 12 Gbps aggregate) at modest marginal cost relative to the total infrastructure investment. This additional capacity is most relevant in failure scenarios, where the compound's full traffic demand must be carried by two or even one surviving circuits, and in specific load-balancing scenarios where routing policy distributes traffic across all three providers simultaneously.

Routing Diversity. PLDT, Globe, and Converge each maintain distinct peering relationships and transit paths to major internet exchange points and content providers. For latency-sensitive traffic (gaming, VR, real-time communication), the edge routers can implement BGP policy to prefer the ISP offering the lowest-latency path to a given destination, regardless of which ISP carries the most traffic by volume.

RATIONALE
Three ISPs is not excessive for a campus-grade residential deployment of this scale. The marginal cost of Globe and Converge subscriptions, relative to the total networking infrastructure spend represented by this specification, is a rounding error. The uptime and routing policy benefits are substantial and unambiguous. A single ISP design would be an unjustifiable architectural risk.
§ 6.2 ONT Placement and Demarcation

Each ISP shall install its Optical Network Terminal (ONT) at the designated Network Demarcation Room (NDR), located in the main telecommunications intake of the compound, ideally adjacent to the primary network equipment room housing the edge routers. The demarcation point between ISP responsibility and compound network responsibility is defined as the Ethernet output port of each ONT. All equipment beyond this point, including the edge routers and all downstream infrastructure, is the property and responsibility of the compound owner.

The PLDT ONT shall be connected to the primary edge router via a direct 10G SFP+ fiber connection, requiring a compatible SFP+ transceiver module matched to the ONT's optical interface type. The Globe and Converge ONTs shall connect via 1GBase-T copper Ethernet to the 1G management port or a 10G/1G combo SFP+ port on the edge routers, as configured by the deploying engineer.

§ 7
Edge Routing Layer
MikroTik CCR2004-1G-12S+2XS — Dual HA Configuration

The edge routing layer constitutes the logical boundary between the external WAN environment and the internal compound network. It is responsible for all functions associated with WAN connectivity, network address translation, stateful packet inspection, inter-VLAN routing policy, and traffic engineering across multiple upstream providers. This layer is implemented as a dual-chassis High Availability pair, providing transparent failover in the event of any single router hardware failure.

Hardware Specification — MikroTik CCR2004-1G-12S+2XS
CPU Architecture: Marvell Octeon III CN7130 · ARM64
CPU Cores / Frequency: 4 cores · 1.0 GHz
System RAM: 4 GB DDR4
Storage (NAND): 128 MB
10G SFP+ Ports: 12× SFP+ (10G/1G combo)
25G SFP28 Ports: 2× SFP28 (25G/10G)
1G Management Port: 1× RJ45 GbE (out-of-band mgmt)
Total Throughput: ≥ 50 Gbps (tested)
Routing Performance (64B): ≥ 10 Mpps
Power Supply: Dual redundant PSU (AC)
Operating System: RouterOS 7.x (latest stable)
Form Factor: 1U rack-mount · 443 × 232 × 44 mm
Quantity Deployed: 2× (MASTER + BACKUP)
Regulatory: CE, FCC, RoHS, REACH
§ 7.1 VRRP High Availability Configuration

The two MikroTik CCR2004 routers shall be configured as a Virtual Router Redundancy Protocol version 3 (VRRPv3, RFC 5798) pair. This configuration presents a single virtual IP address (192.168.1.1) to all downstream devices as the default gateway, whilst the active physical router handling traffic (designated MASTER) can be transparently replaced by the BACKUP router in the event of failure, without requiring any reconfiguration of downstream switches, APs, or client devices.

VRRP Configuration Parameters
VRRP Version: VRRPv3 (RFC 5798)
Virtual IP (VIP): 192.168.1.1/24
MASTER Priority: 110 (EDGE-RTR-01)
BACKUP Priority: 90 (EDGE-RTR-02)
Advertisement Interval: 1 second
Preemption: Enabled (MASTER recovers automatically)
Authentication: None — RFC 5798 defines no authentication mechanism for VRRPv3; the VRRP segment is protected by L2 isolation
Failover Detection Time: ~3.6 seconds (3× advertisement interval + skew time per RFC 5798)
Traffic Impact on Failover: < 4 seconds (TCP sessions re-established)
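The failover detection time above follows directly from the RFC 5798 timer formulas. A minimal sketch using the advertisement interval and BACKUP priority specified in this section:

```python
# VRRPv3 (RFC 5798) failover timing for the parameters in this section.
# Master_Down_Interval = 3 * Master_Adver_Interval + Skew_Time
# Skew_Time            = ((256 - Priority) * Master_Adver_Interval) / 256

def master_down_interval(adver_interval_s: float, backup_priority: int) -> float:
    """Seconds a BACKUP waits without advertisements before claiming MASTER."""
    skew_time = ((256 - backup_priority) * adver_interval_s) / 256
    return 3 * adver_interval_s + skew_time

adver_interval = 1.0          # seconds, per the table above
backup_priority = 90          # EDGE-RTR-02

skew = ((256 - backup_priority) * adver_interval) / 256
print(f"Skew time            : {skew:.3f} s")
print(f"Master down interval : {master_down_interval(adver_interval, backup_priority):.3f} s")
# ~3.65 s worst-case detection for priority 90, consistent with the ~3.6 s figure above.
```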
§ 7.2 BGP Multi-ISP Routing Policy

Border Gateway Protocol version 4 (BGP-4, RFC 4271) is employed at the edge layer to manage routing across three upstream ISP connections. Each ISP is assigned a distinct path weight and local preference value, implementing a deterministic primary/secondary/tertiary traffic routing hierarchy. Under normal operating conditions, all internet-bound traffic is routed via the PLDT 10G connection, capitalizing on its superior bandwidth. In the event of PLDT circuit failure, traffic is automatically diverted to the Globe 1G connection via BGP route withdrawal and re-advertisement. If both PLDT and Globe circuits fail, the Converge 1G circuit assumes all internet traffic. OSPF is used internally within the compound network for distributing routes between the edge and core layers.

TABLE 7-01 BGP Route Policy and Load Balancing Configuration
ISP | BGP Local Preference | Weight | Condition for Use | Traffic Share (Normal) | Traffic Share (PLDT Down)
PLDT 10G | 300 | 1000 | Circuit operational | ~83% (10G / 12G) | 0%
Globe 1G | 200 | 500 | PLDT down, or policy route | ~8.3% (1G / 12G) | ~50%
Converge 1G | 100 | 200 | PLDT + Globe down, or policy route | ~8.3% (1G / 12G) | ~50%
Load balancing percentages are nominal and vary with actual link utilization. Specific traffic classes (gaming, VR) may be policy-routed to the ISP offering lowest measured latency to the target destination regardless of the above hierarchy.
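A minimal sketch of the deterministic primary/secondary/tertiary selection described above, using the local-preference values from Table 7-01. This is an illustrative model of the decision logic only, not RouterOS or BGP configuration syntax.

```python
# Illustrative model of the primary/secondary/tertiary ISP selection in Table 7-01.
# Highest BGP local preference among operational circuits wins.
from dataclasses import dataclass

@dataclass
class IspCircuit:
    name: str
    local_pref: int
    capacity_gbps: float
    up: bool = True

circuits = [
    IspCircuit("PLDT 10G",    local_pref=300, capacity_gbps=10.0),
    IspCircuit("Globe 1G",    local_pref=200, capacity_gbps=1.0),
    IspCircuit("Converge 1G", local_pref=100, capacity_gbps=1.0),
]

def best_exit(circuits):
    """Return the operational circuit with the highest local preference."""
    candidates = [c for c in circuits if c.up]
    return max(candidates, key=lambda c: c.local_pref) if candidates else None

print("Normal operation :", best_exit(circuits).name)
circuits[0].up = False                      # simulate PLDT circuit failure
print("PLDT down        :", best_exit(circuits).name)
circuits[1].up = False                      # simulate Globe failure as well
print("PLDT+Globe down  :", best_exit(circuits).name)
```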
RATIONALE
Why MikroTik CCR2004? The CCR2004 represents the optimal intersection of raw routing throughput, port density, feature richness, and cost for this application. Its 12×10G SFP+ and 2×25G SFP28 ports provide more than sufficient interface count for the three ISP uplinks, the two 25G cross-connects to the core switches, and several management and monitoring interfaces simultaneously. RouterOS 7 provides a fully featured enterprise routing suite at a price point substantially below comparable Cisco or Juniper edge hardware, without any functional compromise relevant to this design. The dual redundant PSUs are mandatory; an edge router taken offline by the failure of a single non-redundant power supply would be an embarrassing and entirely preventable single point of failure.
§ 8
Central Core Switching Layer
Cisco Catalyst 9300X — Dual-Chassis Enterprise Backbone

The central core switching layer constitutes the primary high-speed switching fabric of the compound network. Two Cisco Catalyst 9300X chassis, cross-connected and operating as a redundant pair, provide the electrical and optical distribution backbone to all wing core switches, the edge routers, and any directly-attached core infrastructure (servers, NAS, hypervisors, network management appliances). The switching capacity of this layer exceeds the aggregate of all conceivable traffic demands that could simultaneously arise from all downstream devices.

Hardware Specification — Cisco Catalyst 9300X-24Y + C9300X-NM-4C (per chassis)
Model Designator: C9300X-24Y-A
Native Port Count: 24× 25G SFP28 (auto 1G/10G/25G)
Network Module (installed): C9300X-NM-4C — 4× QSFP28 (40G/100G dual-rate)
NM-4C Port Allocation: 4× 100G wing-core uplinks (1 per wing, Wings A–D)
Switching Capacity (per chassis): 480 Gbps (full duplex)
Forwarding Rate: 357.14 Mpps
Switching Mode: Cut-through / Store-and-forward
DRAM: 4 GB (upgradeable to 32 GB)
Flash Storage: 16 GB
Layer 2/3 Features: Full L2 + L3 IP Services (IOS-XE)
Redundant PSU: Dual AC PSU (1100W each)
StackWise: StackWise-480 (up to 8 chassis, 480 Gbps stack ring)
Software: Cisco IOS-XE (latest train)
Management: YANG/NETCONF, RESTCONF, gRPC, SNMP v3, SSH v2
Quantity Deployed: 2× (CORE-SW-01 + CORE-SW-02)
Combined Core Fabric: 960 Gbps total switching capacity
Total 100G QSFP28 Ports (pair): 8× (4 per chassis via NM-4C)
Inter-Chassis Link (ICL): 2×25G SFP28 LACP — 50 Gbps aggregate (native ports)
§ 8.1 Cross-Connection and Redundancy Architecture

The two core switches are physically cross-connected via a dedicated inter-chassis link aggregate (ICL), implemented as a 2×25G SFP28 LACP bundle using the native SFP28 ports of the C9300X-24Y chassis, providing 50 Gbps of bidirectional inter-chassis bandwidth — a 2.5× improvement over a 2×10G ICL and commensurate with the increased per-wing uplink density delivered by the 100G QSFP28 wing-core links described in Section 9. This cross-connect enables real-time forwarding-table and VLAN-database synchronization between chassis, permits cross-chassis traffic forwarding without hairpinning through the edge layer, and provides the physical substrate required for HSRP and MLAG failover coordination.

Each wing core switch (see Section 9) terminates two 100G QSFP28 uplinks to the central core layer — one physical fiber to CORE-SW-01 and one to CORE-SW-02 — forming a Multi-Chassis Link Aggregation Group (MLAG) bundle presenting 200 Gbps of aggregate logical bandwidth per wing with full link-level redundancy. The central core switches each require one Cisco C9300X-NM-4C network module to provide the four QSFP28 100G ports necessary to terminate the uplinks of all four above-ground wing cores (Wings A–D, one 100G uplink per wing per core chassis). The underground facility wing core connects via 2×25G SFP28 LACP to the native SFP28 ports of both core chassis, consistent with its lower aggregate traffic profile. The edge routers connect to the core via the 25G SFP28 native ports of each core chassis, one 25G link per router per core.

CAUTION
Spanning Tree Protocol (Rapid PVST+ or Multiple Spanning Tree per IEEE 802.1s) shall be fully configured and validated across the core and wing core switching layers prior to production deployment. The cross-connected dual-core topology creates potential for switching loops if STP is not correctly implemented. Root bridge assignment shall be explicitly configured on CORE-SW-01 (primary) and CORE-SW-02 (secondary), with appropriate port cost and priority settings. MLAG peer-link configuration must be completed on both chassis before bringing up wing core uplinks.
§ 8.2 L3 Routing and Inter-VLAN Services

The central core switches operate as the primary Layer 3 routing engine for the internal compound network. All inter-VLAN routing (traffic passing between VLANs, e.g., a device on the Management VLAN initiating a connection to a device on the Trusted Client VLAN) is performed at the core switching layer via Switched Virtual Interfaces (SVIs). Each VLAN defined in Section 12 is assigned a corresponding SVI on both core switches, with HSRP providing a single virtual gateway IP for client devices regardless of which physical core switch is active. OSPF distributes the internal routing table between the core switches and the edge routers.

RATIONALE
Why Cisco Catalyst 9300X-24Y at the core? The C9300X-24Y is one of only three Catalyst 9300X chassis models compatible with the C9300X-NM-4C QSFP28 module — a hard compatibility constraint that makes this model selection non-negotiable. Its 480 Gbps switching fabric per chassis ensures the core layer will never be a bottleneck: even with all four wing 100G uplinks active simultaneously (4×100G = 400 Gbps total inbound), two chassis in MLAG provide 960 Gbps combined fabric capacity — a 2.4:1 headroom ratio at maximum theoretical wing load. Cisco IOS-XE provides industry-standard, mature, and extensively documented routing and switching features with a decades-long track record of stability in enterprise deployments. The 9300X family's native YANG/NETCONF and gRPC telemetry interfaces are essential for the management and monitoring strategy described in Section 19.
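The 2.4:1 headroom figure cited in the rationale can be reproduced directly from the port counts and fabric figures quoted in this section; a quick arithmetic check:

```python
# Core-layer headroom check using the figures quoted in Section 8.
wing_uplinks_per_chassis = 4          # Wings A-D, one 100G uplink per wing per chassis
uplink_speed_gbps = 100
fabric_per_chassis_gbps = 480
chassis_count = 2

max_wing_ingress = wing_uplinks_per_chassis * uplink_speed_gbps    # 400 Gbps per chassis
combined_fabric = fabric_per_chassis_gbps * chassis_count          # 960 Gbps

print(f"Max simultaneous wing ingress per chassis : {max_wing_ingress} Gbps")
print(f"Combined core fabric capacity             : {combined_fabric} Gbps")
print(f"Headroom ratio                            : {combined_fabric / max_wing_ingress:.1f} : 1")
```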
§ 9
Wing Core Distribution Layer
Cisco Catalyst 9300X-24Y + C9300X-NM-4C per Wing — 100G QSFP28 Uplink Architecture

The wing core distribution layer provides per-wing aggregation and distribution services, sitting logically between the central core switches and the floor-level access switches. Each discrete wing of the compound — Wing A (Primary Residential), Wing B (Guest and Common), Wing C (Technical and Operations), Wing D (Recreation and VR), and the Underground Facility — is served by a dedicated Cisco Catalyst 9300X-24Y distribution switch equipped with a C9300X-NM-4C network module. The C9300X-24Y was selected over the previously considered C9300X-24UX specifically because it is one of only three Catalyst 9300X models compatible with the C9300X-NM-4C QSFP28 module — the others being the C9300X-48HX and C9300X-48TX, both of which are unnecessarily port-dense for a wing distribution role. The C9300X-24Y's 24 native 25G SFP28 ports provide the downlink density required to serve all floor-level access switches at 2×25G LACP each, while the NM-4C's four QSFP28 ports simultaneously provide 100G uplinks to the central core pair, without port-group contention.

This one-to-one correspondence between wings and distribution switches ensures that each wing's traffic is isolated, managed, and routed independently, preventing a single access layer event from affecting other wings. The 100G QSFP28 uplink architecture introduced at this tier represents a step-change in inter-layer bandwidth: each wing presents 200 Gbps of logical uplink capacity to the central core via a 2-member Multi-Chassis Link Aggregation Group (MLAG), compared to 20 Gbps in a conventional 2×10G design — a tenfold increase that eliminates the distribution-to-core link as any conceivable bottleneck for the lifetime of this infrastructure.

§ 9.1 C9300X-NM-4C Module — Technical Detail

The Cisco C9300X-NM-4C is a field-replaceable, hot-swappable network expansion module providing four QSFP+/QSFP28 ports, each capable of operating at either 40 Gigabit Ethernet (40GBASE-SR4, 40GBASE-LR4) or 100 Gigabit Ethernet (100GBASE-SR4, 100GBASE-LR4, 100GBASE-CWDM4, 100GBASE-PSM4), auto-negotiated based on the installed transceiver. The module provides up to 400 Gbps of raw port bandwidth and integrates natively with the UADP 2.0sec ASIC of the C9300X, enabling line-rate switching and hardware-accelerated IPsec at 100G speeds. In this deployment, the NM-4C is configured in 100G mode exclusively, with 100GBASE-LR4 QSFP28 transceivers installed for compatibility with the existing OS2 single-mode fiber plant.

C9300X-NM-4C — Module Specification
Part Number: C9300X-NM-4C=
Port Count: 4× QSFP+/QSFP28 (hot-swappable transceivers)
Supported Speeds: 40G (40GBASE-SR4/LR4) and 100G (100GBASE-SR4/LR4/CWDM4/PSM4)
Max Bandwidth per Module: 400 Gbps (4× 100G)
Transceiver (this design): 100GBASE-LR4 QSFP28 (10 km, OS2 single-mode, LC Duplex)
Compatible Chassis: C9300X-24Y, C9300X-48HX, C9300X-48TX only
Hot-Swap: Yes — field-replaceable without switch downtime
LACP / Port-Channel: Full LACP 802.1AX support; cross-chassis MLAG capable
Port Allocation — wing cores: Port 1 → CORE-SW-01 (100G MLAG leg); Port 2 → CORE-SW-02 (100G MLAG leg); Ports 3–4 spare
Port Allocation — central cores: Port 1 → Wing A (100G); Port 2 → Wing B (100G); Port 3 → Wing C (100G); Port 4 → Wing D (100G)
LACP Mode (wing-to-core): MLAG cross-chassis — Port 1 + Port 2 form a 200G LACP bundle per wing
§ 9.2 Wing Core — Full Hardware Specification
Hardware Specification — Cisco Catalyst 9300X-24Y + C9300X-NM-4C (per wing)
Base Model: C9300X-24Y-A
Native Ports: 24× SFP28 (auto 1G / 10G / 25G)
Native Port Usage: Access switch downlinks — 2×25G LACP per access switch (up to 12 access switches per wing)
Network Module (NM Slot): C9300X-NM-4C — 4× QSFP28 100G/40G dual-rate
Uplink to CORE-SW-01: 1× 100G QSFP28 (NM-4C port 1) — 100GBASE-LR4
Uplink to CORE-SW-02: 1× 100G QSFP28 (NM-4C port 2) — 100GBASE-LR4
Effective Uplink (MLAG): 200 Gbps aggregate (cross-chassis MLAG, 2×100G)
NM-4C Ports 3–4: Unloaded — reserved for future expansion
Switching Capacity: 480 Gbps (full duplex)
Forwarding Rate: 357.14 Mpps
Layer 2/3: Full L2 + L3 IP Services (IOS-XE)
Redundant PSU: Dual AC PSU (1100W each, hot-swap)
Quantity Deployed: 5× (Wings A, B, C, D + Underground Facility)
§ 9.3 Wing-to-Core MLAG Uplink Architecture

The wing-to-core uplink design employs a cross-chassis Multi-Chassis Link Aggregation Group (MLAG) pattern, a well-established enterprise design in which two physical links from a single source device terminate on two different core chassis, forming a single logical aggregated link. From the wing core's perspective, the two 100G uplinks (one to CORE-SW-01, one to CORE-SW-02) appear as a single Port-Channel interface of 200 Gbps. From the central core pair's perspective, MLAG coordination via the ICL (inter-chassis link) ensures that both chassis present a unified LACP partner to the wing core. The result is that any single core chassis failure causes zero connectivity loss to any wing: the surviving 100G link continues to carry full traffic, and the MLAG subsystem on the surviving core chassis promotes itself to sole active peer within sub-second convergence.

The underground facility (UG-CORE) is a partial exception: it connects to the central core via 2×25G SFP28 LACP (one 25G to each central core chassis, cross-chassis MLAG, 50G aggregate), using the native SFP28 ports of the C9300X-24Y. This reflects the lower aggregate traffic demand of the underground zone (transit corridors, monitoring, security cameras) and reserves the NM-4C QSFP28 ports of the UG-CORE for future expansion capacity. The 50G aggregate uplink for the underground zone exceeds its realistic peak demand by a factor of at least 3× at any foreseeable loading level.

RATIONALE
Why 100G QSFP28 uplinks rather than 2×25G or 4×10G aggregates? Operating the NM-4C at 100G per port is mandated for two reasons. First, 200G MLAG aggregate per wing matches the realistic downstream pressure: up to 12 access switches × 50G (2×25G LACP) = 600G total access bandwidth per wing, making 200G a conservative 3:1 oversubscription ratio — standard enterprise practice. Second, 100GBASE-LR4 uses the existing OS2 single-mode fiber plant throughout this design without any media changes, making the upgrade completely non-disruptive from a physical plant perspective. A 4×25G LACP alternative would consume four NM-4C ports to equal the same 100G bandwidth, leaving no spare ports — and 4×25G LACP provides no latency or path-diversity benefit over a single 100G link. The 100G single-link approach is architecturally cleaner and preserves two spare QSFP28 ports per wing core for future capacity expansion.
TABLE 9-01 Wing Core Switch Assignment and Uplink Configuration
Switch ID | Wing / Zone | Model + Module | Uplink to CORE-SW-01 | Uplink to CORE-SW-02 | MLAG Aggregate | Downlink to Access Switches | Est. # Access Switches
WING-A-CORE | Wing A (Primary Res.) | C9300X-24Y + NM-4C | 1×100G QSFP28 (NM-4C) | 1×100G QSFP28 (NM-4C) | 200G MLAG | 2×25G SFP28 LACP per floor switch (native ports) | 4–6
WING-B-CORE | Wing B (Guest / Common) | C9300X-24Y + NM-4C | 1×100G QSFP28 (NM-4C) | 1×100G QSFP28 (NM-4C) | 200G MLAG | 2×25G SFP28 LACP per floor switch (native ports) | 3–5
WING-C-CORE | Wing C (Technical / Lab) | C9300X-24Y + NM-4C | 1×100G QSFP28 (NM-4C) | 1×100G QSFP28 (NM-4C) | 200G MLAG | 2×25G SFP28 LACP per floor switch (native ports) | 3–4
WING-D-CORE | Wing D (VR / Recreation) | C9300X-24Y + NM-4C | 1×100G QSFP28 (NM-4C) | 1×100G QSFP28 (NM-4C) | 200G MLAG | 2×25G SFP28 LACP per floor switch (native ports) | 2–4
UG-CORE | Underground Facility | C9300X-24Y + NM-4C | 1×25G SFP28 (native port) | 1×25G SFP28 (native port) | 50G MLAG | 2×25G SFP28 LACP per zone switch (native ports) | 2–3
All uplinks use OS2 single-mode fiber (LC Duplex). Wings A–D uplinks use 100GBASE-LR4 QSFP28 transceivers. Underground and all access switch downlinks use 25GBASE-LR SFP28 transceivers. NM-4C ports 3–4 on all wing cores remain unloaded as reserved expansion capacity. Exact access switch counts subject to final RF survey.
§ 10
Access Switching Layer
Ubiquiti UniFi Switch Pro XG 24 PoE — 25G-Uplink Floor-Level Distribution

The access switching layer is the point at which the structured cabling infrastructure of each floor connects to the network. All wired endpoints — access points, wired workstations, IP cameras, rack-mounted servers, smart home controllers, and other network-attached devices — physically terminate at access layer switches. For the purposes of this specification, the Ubiquiti UniFi Switch Pro XG 24 PoE (USW-Pro-XG-24-PoE) is the designated access switch for all floor-level deployments. This model was selected over the older USW-Enterprise-24-PoE for a critical architectural reason: it provides two native 25G SFP28 uplink ports, enabling a 2×25G LACP aggregate (50 Gbps) to the wing core's C9300X-24Y native 25G SFP28 downlink ports — a 2.5× increase in per-switch uplink bandwidth compared to the 2×10G SFP+ available on the legacy model. This upgrade ensures the access-to-distribution uplink does not become the bottleneck in a switching hierarchy now delivering 200G at the distribution-to-core tier.

The USW-Pro-XG-24-PoE also improves the downlink port configuration: its sixteen 10GbE RJ45 PoE+++ ports (auto-sensing at 100M/1G/2.5G/5G/10G) allow AP connections to operate at 2.5G for current Wi-Fi 7 APs while providing an in-place upgrade path to 5G or 10G wired uplinks as future AP hardware demands higher single-port throughput — without any switch replacement. All 24 RJ45 ports deliver IEEE 802.3bt Type 4 PoE+++ at up to 100W per port, exceeding the maximum draw of any AP in the specified wireless layer and providing substantial headroom for next-generation access point hardware.

Hardware Specification — Ubiquiti UniFi Switch Pro XG 24 PoE (USW-Pro-XG-24-PoE)
Model SKU: USW-Pro-XG-24-PoE
10G RJ45 PoE+++ Ports: 16× 10GBase-T (auto 100M/1G/2.5G/5G/10G) — IEEE 802.3bt Type 4
2.5G RJ45 PoE+++ Ports: 8× 2.5GBase-T (auto 10M/100M/1G/2.5G) — IEEE 802.3bt Type 4
PoE Standard: IEEE 802.3bt Type 4 (PoE+++) — up to 100W per port, all RJ45 ports
Total PoE Budget: 720W
SFP28 Uplink Ports: 2× 25G SFP28 (auto 1G/10G/25G) — uplink to wing core
Uplink LACP Config: 2×25G SFP28 LACP → C9300X-24Y native 25G SFP28 ports
Effective Uplink Bandwidth: 50 Gbps aggregate (2×25G LACP)
Switching Capacity: 460 Gbps (non-blocking)
Non-Blocking Throughput: 230 Gbps
Forwarding Rate: 342 Mpps
Layer: Layer 3 (inter-VLAN routing, static routing, DHCP server)
Layer 2 Features: IEEE 802.1Q VLAN, LACP, RSTP, IGMP Snooping, QoS (8 queues), ACLs
Management: UniFi Network Controller (local + cloud), SNMP v3, SSH, 1.3" LCM color touchscreen
Etherlighting: Yes — per-port illumination for visual port-status identification
Form Factor: 1U rack-mount · 442 × 327 × 44 mm
Max Power Draw: ~800W (switch + maximum PoE load)
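Given the 720 W PoE budget quoted above, the number of simultaneously powered access points is bounded by the budget rather than the port count. A minimal budget check follows; the 45 W per-AP figure is an assumed planning value for typical peak draw (the 802.3bt class ceiling is higher), and actual draw should be measured at commissioning.

```python
# PoE budget check for a single USW-Pro-XG-24-PoE.
# 45 W is an assumed typical peak AP draw (planning value); 90 W is the
# 802.3bt Type 4 class ceiling. Measure actual draw at commissioning.
total_poe_budget_w = 720            # switch PoE budget from the specification above
ports_available = 24

for per_ap_w in (45, 90):
    aps_by_budget = total_poe_budget_w // per_ap_w
    aps_supported = min(aps_by_budget, ports_available)
    print(f"At {per_ap_w:>2d} W/AP: budget allows {aps_by_budget:>2d} APs "
          f"(usable: {aps_supported} of {ports_available} ports)")
```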
§ 10.1 Intra-Wing Uplink Bandwidth Analysis

The access switch presents 50 Gbps of aggregate uplink bandwidth to the wing core via its 2×25G SFP28 LACP bundle. The aggregate downlink capacity of its 24 RJ45 ports — sixteen at 10G and eight at 2.5G — reaches a theoretical maximum of 180 Gbps if all ports were simultaneously saturated at their ceiling speeds. In practice, Wi-Fi 7 APs operate at 2.5G on current hardware, yielding a realistic concurrent aggregate of 60 Gbps (24 ports × 2.5G), producing a downlink-to-uplink oversubscription ratio of 1.2:1 — effectively non-blocking for the access layer. Even when 10G port capacity is considered against realistic concurrent wireless load, the 50G uplink remains more than adequate. As future Wi-Fi 8 access points with 10G wired interfaces begin to enter service, the 10G RJ45 ports of the USW-Pro-XG-24-PoE accommodate them natively, and the 50G LACP uplink remains appropriate at a 3.2:1 oversubscription ratio for mixed-speed access deployments.

At the wing level, up to twelve access switches connect to a single wing core via 2×25G LACP each, presenting a maximum theoretical aggregate of 12 × 50G = 600G of downlink-facing bandwidth against the wing core's 200G MLAG uplink — a 3:1 distribution-tier oversubscription ratio, which is standard enterprise practice and appropriate given that no real-world deployment will simultaneously saturate all floors at their maximum port speeds.
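The oversubscription ratios quoted in this subsection follow from simple port arithmetic; the sketch below reproduces them from the port counts and speeds stated above.

```python
# Access- and distribution-tier oversubscription ratios from Section 10.1.
uplink_gbps = 2 * 25                                  # 2x25G LACP per access switch

# Access tier, current deployment: all 24 RJ45 ports driven at 2.5G.
realistic_downlink = 24 * 2.5                         # 60 Gbps
print(f"Access (2.5G APs)      : {realistic_downlink / uplink_gbps:.1f} : 1")

# Access tier, future mixed-speed case: the sixteen 10G ports fully driven.
future_downlink = 16 * 10                             # 160 Gbps
print(f"Access (10G endpoints) : {future_downlink / uplink_gbps:.1f} : 1")

# Distribution tier: up to twelve access switches against the 200G wing MLAG uplink.
wing_downlink = 12 * uplink_gbps                      # 600 Gbps
wing_uplink = 2 * 100                                 # 200G MLAG
print(f"Distribution tier      : {wing_downlink / wing_uplink:.1f} : 1")
```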

RATIONALE
Why upgrade from USW-Enterprise-24-PoE to USW-Pro-XG-24-PoE? The original USW-Enterprise-24-PoE is limited to 2×10G SFP+ uplinks (20G aggregate) and a 128 Gbps switching fabric. In the context of a hierarchy now delivering 200G at the distribution tier, a 20G access uplink creates a 10:1 bottleneck that negates every upstream bandwidth investment. The USW-Pro-XG-24-PoE's 2×25G SFP28 uplinks (50G aggregate) and 460 Gbps switching fabric maintain architectural coherence throughout the hierarchy. The per-unit price premium over the Enterprise model is modest relative to total infrastructure investment, and is unambiguously justified by the 2.5× uplink improvement, the more capable 10G downlink ports, the higher 720W PoE budget, and the superior 460 Gbps switching fabric. The 25G SFP28 uplinks terminate directly on the wing core C9300X-24Y's native 25G ports, requiring only 25GBASE-LR SFP28 transceivers and the existing OS2 fiber plant — no additional modules or cabling changes.
§ 11
IEEE 802.11be Wi-Fi 7 Wireless Layer
Quad-Band EHT Wireless Infrastructure — Multi-Link Operation
§ 11.1 Wi-Fi 7 Standard Synopsis (IEEE 802.11be)

IEEE 802.11be, commercially designated Wi-Fi 7 and technically classified as Extremely High Throughput (EHT), represents the seventh major revision of the 802.11 wireless LAN standard and constitutes the most significant advancement in wireless networking since the introduction of OFDMA in 802.11ax (Wi-Fi 6). The standard was ratified in 2024 and defines a comprehensive set of PHY and MAC layer enhancements that collectively enable theoretical maximum aggregate throughputs of up to 46 Gbps per access point. The following table summarizes the key PHY parameters that define the 802.11be standard and distinguishes it from its predecessors.

TABLE 11-01 IEEE 802.11be (Wi-Fi 7) PHY Parameter Comparison vs. Predecessors
Parameter | Wi-Fi 5 (802.11ac) | Wi-Fi 6/6E (802.11ax) | Wi-Fi 7 (802.11be) | Improvement vs Wi-Fi 6E
Standard Designation | 802.11ac | 802.11ax | 802.11be | —
Technical Name | VHT | HE | EHT | —
Max Channel Width | 160 MHz | 160 MHz | 320 MHz | 2× channel width
Max Modulation | 256-QAM | 1024-QAM | 4096-QAM | 4× modulation density
Max Coding Rate | 5/6 | 5/6 | 5/6 | Equal
Max Spatial Streams (per band) | 8 | 8 | 16 | 2× per band
Multi-Link Operation (MLO) | No | No | Yes (defining feature) | Revolutionary
Multi-RU (Resource Unit) | No | No | Yes | New
Max Theoretical Rate (single band) | 6.9 Gbps | 9.6 Gbps | 23.1 Gbps | 2.4× improvement
Max Theoretical Rate (4-band total) | N/A | N/A (tri-band only) | ~46+ Gbps | N/A (new capability)
OFDMA | No | Yes (DL/UL) | Yes (DL/UL, enhanced) | Enhanced
MU-MIMO | 4×4 DL only | 8×8 DL + UL | 16×16 DL + UL | 2× streams
Frequency Bands | 2.4 / 5 GHz | 2.4 / 5 / 6 GHz | 2.4 / 5 / 6 GHz (×2) | +1 additional 6 GHz band
Target Wake Time (TWT) | No | Yes | Yes (enhanced) | Enhanced
§ 11.2 Quad-Band Radio Architecture

The quad-band radio architecture is the defining characteristic of the AP platform specified in this document and the primary justification for the selection of Wi-Fi 7 hardware. Whereas prior Wi-Fi 6E routers and APs operated across three bands (2.4 GHz, 5 GHz, and 6 GHz), the quad-band Wi-Fi 7 platforms specified herein add a second, independent 6 GHz radio, operating in a distinct non-overlapping frequency segment of the 6 GHz band. This configuration provides four simultaneous, independent radio interfaces per physical AP unit.

[Figure content: quad-band radio summary — 2.4 GHz (up to 40 MHz BW, long range), 5 GHz (up to 160 MHz BW, mid range), 6 GHz Low (up to 320 MHz BW, short-to-mid range), 6 GHz High (up to 320 MHz BW, short-to-mid range); per-band maximum PHY rates are tabulated in Table 11-02 below.]
TABLE 11-02 Quad-Band Radio Configuration — Per-Band PHY Parameters
Radio | Frequency Range | Max Channel BW | Max Spatial Streams | Max Modulation | Max PHY Rate | Primary Use Case | Regulatory Notes
Radio 1 — 2.4 GHz | 2.400–2.500 GHz | 40 MHz | 4×4 MIMO | 4096-QAM (EHT) | ~1.1 Gbps | Legacy device support, IoT, long-range backhaul reach | PH: 100 mW EIRP max; channels 1/6/11 non-overlapping
Radio 2 — 5 GHz | 5.150–5.850 GHz | 160 MHz | 4×4 MIMO | 4096-QAM (EHT) | ~5.8 Gbps | General client association, mid-throughput devices | PH: DFS required channels 52–144; EIRP limits per NTC
Radio 3 — 6 GHz Low | 5.925–6.425 GHz | 320 MHz | 4×4 MIMO | 4096-QAM (EHT) | ~11.5 Gbps | High-throughput clients, VR primary, gaming | PH: NTC MC 05-08-2020; Low Power Indoor (LPI) mode
Radio 4 — 6 GHz High | 6.425–7.125 GHz | 320 MHz | 4×4 MIMO | 4096-QAM (EHT) | ~11.5 Gbps | High-throughput clients, AP backhaul, dedicated VR band | PH: Subject to NTC allocation; Very Low Power (VLP) or LPI
¹ Maximum PHY rates shown are for 4×4 MIMO with 4096-QAM, 5/6 coding rate, and maximum channel width. Real-world throughput will be lower. ² 6 GHz availability in the Philippines is subject to NTC regulatory confirmation; the design assumes LPI operation as currently enacted.
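The per-band maximum PHY rates in Table 11-02 follow from the standard OFDM rate relationship: data rate = (data subcarriers × bits per subcarrier × coding rate × spatial streams) ÷ symbol duration. The sketch below reproduces the ~5.8 Gbps and ~11.5 Gbps figures for the 4×4, 4096-QAM configurations above and the ~46 Gbps standard maximum at 16 streams quoted in Table 11-01; the subcarrier counts and symbol timing are the nominal 802.11be values.

```python
# 802.11be PHY rate estimate: rate = N_sd * N_bpscs * R * N_ss / T_sym.
# Nominal EHT values: 12.8 us symbol + 0.8 us guard interval;
# 1960 data subcarriers at 160 MHz, 3920 at 320 MHz (full-band allocation).

def eht_phy_rate_gbps(data_subcarriers: int, bits_per_subcarrier: int,
                      coding_rate: float, spatial_streams: int,
                      symbol_us: float = 12.8, guard_us: float = 0.8) -> float:
    bits_per_symbol = data_subcarriers * bits_per_subcarrier * coding_rate * spatial_streams
    return bits_per_symbol / ((symbol_us + guard_us) * 1e-6) / 1e9

# 5 GHz radio: 160 MHz, 4x4, 4096-QAM (12 bits/subcarrier), rate 5/6 -> ~5.8 Gbps
print(f"160 MHz, 4 SS : {eht_phy_rate_gbps(1960, 12, 5/6, 4):.2f} Gbps")
# 6 GHz radios: 320 MHz, 4x4, 4096-QAM, rate 5/6                     -> ~11.5 Gbps
print(f"320 MHz, 4 SS : {eht_phy_rate_gbps(3920, 12, 5/6, 4):.2f} Gbps")
# Standard maximum: 320 MHz, 16 spatial streams                      -> ~46 Gbps
print(f"320 MHz, 16 SS: {eht_phy_rate_gbps(3920, 12, 5/6, 16):.1f} Gbps")
```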
NOTE
The 6 GHz band (both Low and High segments) is a clean spectrum resource in the Philippine market. It carries no legacy Wi-Fi 4/5 devices, no Bluetooth overlap, and no microwave oven interference. This makes the 6 GHz band the optimal medium for latency-sensitive, high-throughput applications including VR streaming. The two independent 6 GHz radios on the quad-band AP allow simultaneous serving of multiple client groups at full 320 MHz channel width without inter-client resource contention.
§ 11.3 Multi-Link Operation (MLO) — Architecture and Benefits

Multi-Link Operation (MLO) is the singular most important technical feature introduced in IEEE 802.11be and the central reason that Wi-Fi 7 represents a qualitative, rather than merely quantitative, advancement over Wi-Fi 6E for the applications targeted by this specification. MLO fundamentally changes the relationship between a Wi-Fi client device and an access point by allowing what appears to the application layer as a single network connection to simultaneously utilize multiple frequency bands and channels at the PHY and MAC layers.

Prior Art — Band Steering (Pre-MLO)

Prior to Wi-Fi 7, a multi-band AP could make available multiple SSIDs (or a single SSID on multiple bands), and a client device would associate with exactly one band at a time. Sophisticated firmware could implement "band steering" to coax clients from the congested 2.4 GHz band to the less congested 5 GHz or 6 GHz bands, but this was a soft advisory mechanism only, and the client could ignore it. More critically, even with band steering, a client had a single active radio association — if that band experienced interference or congestion, the client's performance suffered, and re-association to another band incurred observable latency (typically 50–300 ms) due to the need to complete a new association handshake.

MLO Operation

With MLO in 802.11be, a client device capable of MLO operation (designated an MLO Multi-Link Device, or MLD) establishes a single logical link with the AP's MLD entity that simultaneously encompasses two or more physical RF links across different bands. Traffic can be dynamically scheduled across any active link simultaneously, with the MAC layer handling all multiplexing transparently. The benefits are profound: effective throughput is the aggregate of all participating links; if one link suffers interference or congestion, traffic is dynamically shifted to other links with sub-millisecond latency; effective round-trip time is reduced because packets can always be transmitted on whichever link offers the earliest transmission opportunity; and reliability is substantially improved because a burst of interference on one band cannot cause packet loss if alternative links are available.

TABLE 11-03 MLO Configuration and Performance Parameters
MLO Parameter | Value / Setting | Notes
MLO Bands in Use | 5 GHz + 6 GHz Low + 6 GHz High (3-link MLO) | 2.4 GHz excluded from MLO (narrow channels, heavy legacy/ISM interference)
Primary Link (Anchor) | 6 GHz Low (Radio 3) | Lowest-latency link; preferred for latency-critical traffic classes
Secondary Links | 5 GHz (Radio 2) + 6 GHz High (Radio 4) | Load-balanced for throughput; failover to primary if needed
Max Aggregate MLO Throughput | ~28.8 Gbps (5G: 5.8 + 6G-Low: 11.5 + 6G-High: 11.5) | Theoretical; real-world ~40–60% of theoretical
MLO Latency Target | ≤ 2 ms per link (intra-wing) | MLO effective latency ≤ 1 ms (best available link)
MLO Client Requirement | Wi-Fi 7 MLO-capable client (MLD) | Legacy Wi-Fi 6/5 clients associate normally on a single band
Traffic Steering in MLO | AP-directed, dynamic per-packet scheduling | Low-latency traffic prioritized to least-congested, lowest-latency link
MLO Mode | Enhanced MLO (eMLSR + STR modes supported) | STR (Simultaneous Transmit and Receive) preferred for max throughput
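The AP-directed, per-packet steering behaviour described in the table can be illustrated with a toy scheduler that places each frame on the active link offering the earliest completion time. This is a conceptual model only, with made-up queue and airtime figures; it is not the 802.11be MAC.

```python
# Toy model of MLO per-packet link steering: send each frame on the active link
# that can finish transmitting it soonest. Conceptual only - not the 802.11be MAC.
from dataclasses import dataclass

@dataclass
class MloLink:
    name: str
    busy_until_ms: float     # time at which the link's medium becomes free (illustrative)
    airtime_ms: float        # assumed airtime per frame on this link (illustrative)
    up: bool = True

links = [
    MloLink("6 GHz Low (anchor)", busy_until_ms=0.10, airtime_ms=0.05),
    MloLink("6 GHz High",         busy_until_ms=0.40, airtime_ms=0.05),
    MloLink("5 GHz",              busy_until_ms=0.20, airtime_ms=0.12),
]

def steer(frame_id: int, now_ms: float) -> str:
    """Pick the active link that can finish transmitting the frame soonest."""
    candidates = [l for l in links if l.up]
    best = min(candidates, key=lambda l: max(l.busy_until_ms, now_ms) + l.airtime_ms)
    start = max(best.busy_until_ms, now_ms)
    best.busy_until_ms = start + best.airtime_ms      # occupy the chosen link
    return f"frame {frame_id} -> {best.name} (done at {best.busy_until_ms:.2f} ms)"

for i in range(4):
    print(steer(i, now_ms=0.0))

links[0].up = False                                   # interference burst on the anchor link
print(steer(99, now_ms=0.0), "(anchor link unavailable)")
```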
§ 11.4 Access Point Specifications
§ 11.4.1 — Anchor AP: Asus ROG Rapture GT-BE98 Pro

One Asus ROG Rapture GT-BE98 Pro is designated as the anchor access point per floor or zone. The anchor AP serves as the primary high-capacity node for that zone, handling the densest client load and providing the reference BSSID for roaming coordination within the zone.

Asus ROG Rapture GT-BE98 Pro — Wireless Specification
Wi-Fi Standard | IEEE 802.11be (Wi-Fi 7)
Band Configuration | Quad-Band: 2.4G + 5G + 6G-Low + 6G-High
2.4 GHz Radio | 4×4 MIMO · 40 MHz · Up to 688 Mbps (EHT)
5 GHz Radio | 4×4 MIMO · 160 MHz · Up to 5,765 Mbps
6 GHz Low Radio | 4×4 MIMO · 320 MHz · Up to 11,530 Mbps
6 GHz High Radio | 4×4 MIMO · 320 MHz · Up to 11,530 Mbps
Total Aggregate Throughput | Up to ~19.4–25 Gbps (theoretical)
MLO Support | Yes — Multi-Link Operation (3-link MLO)
MU-MIMO | Up to 8×8 per band (4×4 per radio on this model)
Wired Ports | 1× 10G WAN, 1× 10G LAN (SFP+), 4× 2.5G LAN
USB | 1× USB 3.0
PoE Power Input | 802.3bt PoE++ (up to 90W via RJ45)
Processor | Quad-core 2.0 GHz (Broadcom / Qualcomm SoC)
RAM / Flash | 2 GB DDR4 / 256 MB NAND
OFDMA | Yes — UL + DL, all bands
Beamforming | Implicit + Explicit (per 802.11be)
Target Wake Time (TWT) | Yes (Enhanced TWT, 802.11be)
802.11k/r/v | Yes (roaming assistance, Fast BSS Transition)
WPA3 Security | WPA3-Personal (SAE), WPA3-Enterprise (192-bit)
Management | Asus Router App, ASUS AiMesh, SNMP
Dimensions | 395 × 395 × 84 mm (ceiling or wall mount)
§ 11.4.2 — Dense Deployment AP: Asus ZenWiFi Pro ET12

The Asus ZenWiFi Pro ET12 is deployed as the primary density-filling access point throughout all residential and mixed-use zones. Multiple ET12 units are deployed per floor at spacing intervals determined by RF modeling and site survey, supplementing the anchor GT-BE98 Pro and ensuring comprehensive, overlap-redundant coverage throughout each zone.

Asus ZenWiFi Pro ET12 — Wireless Specification
Wi-Fi Standard | IEEE 802.11be (Wi-Fi 7)
Band Configuration | Tri-Band (EHT): 2.4G + 5G + 6G · (Quad-band via firmware on Pro variant)
2.4 GHz | 4×4 · 40 MHz · Up to 688 Mbps
5 GHz | 4×4 · 160 MHz · Up to 5,765 Mbps
6 GHz | 4×4 · 320 MHz · Up to 11,530 Mbps
MLO | Yes — 802.11be Multi-Link Operation
Wired | 1× 10G SFP+ · 1× 10G RJ45 · 2× 2.5G RJ45
PoE | 802.3bt PoE++ (90W)
Primary Role | High-density fill AP — residential / mixed zones
§ 11.4.3 — Technical Zone AP: Asus ProArt Wi-Fi 7 AP

Wing C (Technical / Lab) utilizes the Asus ProArt series Wi-Fi 7 AP, optimized for workstation and creative-professional clients with large file transfers, NAS access, and high sustained-throughput requirements. The ProArt aesthetic and management profile integrate cleanly with the technical character of Wing C.

§ 11.5 Coverage and Density Planning

AP placement is governed by three requirements simultaneously: coverage (every point in the compound receiving a usable signal from at least one AP), redundant overlap (every point receiving an adequate signal from at least two APs, enabling seamless roaming without coverage gaps), and capacity (sufficient APs deployed per zone such that no single AP serves more clients than its OFDMA/MU-MIMO scheduling engine can simultaneously serve at acceptable per-client throughput).

TABLE 11-04 Wireless Coverage Planning — AP Density Targets by Zone Type
Zone Type | Target AP Spacing | Target Client Density | Min. RSSI at Cell Edge | Primary Band (MLO Anchor) | Handoff Protocol
VR Arena / Wing D | 1 AP per ~35 m² | ≤ 20 clients/AP | −60 dBm (6 GHz) | 6 GHz Low (320 MHz) | 802.11r Fast BSS + 802.11k Neighbor Reports
Wing A Primary Residential | 1 AP per ~40 m² | ≤ 15 clients/AP | −65 dBm (6 GHz) | 6 GHz Low / High (MLO) | 802.11r + 802.11v BSS Transition
Wing B Guest / Common | 1 AP per ~55 m² | ≤ 25 clients/AP | −67 dBm (5 GHz) | 5 GHz + 6 GHz (MLO) | 802.11r + 802.11k
Wing C Technical | 1 AP per ~50 m² | ≤ 10 clients/AP | −65 dBm (6 GHz) | 6 GHz High (320 MHz) | 802.11r (wired-like roaming)
Underground Corridors | 1 AP per ~80 m² | ≤ 10 clients/AP | −70 dBm (2.4 GHz) | 2.4 GHz (long range) + 5 GHz | 802.11r
Exterior Perimeter | Sector APs at perimeter points | ≤ 30 clients/AP | −72 dBm (5 GHz) | 5 GHz | 802.11k neighbor-guided
RATIONALE
VR Streaming Latency Requirements. Standalone VR headsets (e.g., Meta Quest, Sony PlayStation VR2 wireless, HTC Vive XR) require end-to-end wireless latency of ≤ 7 ms to prevent motion sickness and maintain visual coherence at high refresh rates (90–120 Hz). The 6 GHz band's clean spectrum and 320 MHz channel width, combined with MLO's ability to deliver packets on the first available link without contention, enable intra-wing wireless RTT of 1–3 ms under typical load conditions — well within the VR headset tolerance. The dense AP spacing in Wing D (1 per 35 m²) ensures that no VR client is ever more than approximately 3 meters from an AP, maximizing RSSI and minimizing the path loss that would otherwise reduce achievable MCS index and effective throughput.
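One simple, non-normative way to relate the spacing targets in Table 11-04 to AP counts and worst-case client-to-AP distance is sketched below. The equal-area circular-cell model and the 300 m² example area are illustrative assumptions; final placement remains governed by the RF site survey.

```python
import math

def plan_zone(area_m2: float, m2_per_ap: float) -> tuple[int, float]:
    """AP count and equivalent-circle cell radius for a zone.

    Models each AP as covering a circular cell whose area equals the
    per-AP spacing target, so the radius approximates the worst-case
    horizontal distance from a client to its nearest AP.
    """
    ap_count = math.ceil(area_m2 / m2_per_ap)
    cell_radius_m = math.sqrt(m2_per_ap / math.pi)
    return ap_count, cell_radius_m

# Hypothetical 300 m² VR arena at the Wing D target of 1 AP per ~35 m²
aps, radius = plan_zone(300, 35)
print(f"{aps} APs, clients within ~{radius:.1f} m of the nearest AP")  # ~9 APs, ~3.3 m
```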
§ 12
VLAN Architecture and Segmentation
IEEE 802.1Q Network Segmentation Schema

The compound network is segmented into distinct VLANs to enforce security boundaries, isolate traffic classes, simplify access control policy, and optimize broadcast domain size. Each VLAN is assigned a dedicated IPv4 subnet from the private address space (10.0.0.0/8) and carries an associated SSID for wireless clients that belong to that segment. All inter-VLAN traffic routing is performed at the core switching layer via SVIs as described in Section 8.2.

TABLE 12-01 VLAN Definition and Configuration Schema — NGRF-NET-001
VLAN ID | VLAN Name | IPv4 Subnet | Gateway (HSRP VIP) | DHCP Pool Range | SSID Mapping | Security Tier | Description
1 | NATIVE (Mgmt) | 10.0.1.0/24 | 10.0.1.1 | 10.0.1.10–254 | None (wired only) | ★★★★★ Critical | Network management — switches, routers, APs. Isolated from all user VLANs. SSH/SNMP access only.
10 | TRUSTED-PRIVATE | 10.0.10.0/23 | 10.0.10.1 | 10.0.10.10–510 | NGRF-PRIVATE | ★★★★ High Trust | Principal and family devices. Full internal and WAN access. QoS Priority 1 for VR and gaming.
20 | WORKSTATION | 10.0.20.0/24 | 10.0.20.1 | 10.0.20.10–250 | NGRF-WORK | ★★★★ High Trust | Desktop workstations, NAS clients, creative workstations. Full LAN access. WAN access permitted.
30 | VR-PRIORITY | 10.0.30.0/24 | 10.0.30.1 | 10.0.30.10–200 | NGRF-VR | ★★★ Medium Trust | VR headsets and gaming consoles. Strict QoS Priority 1 (VR) / Priority 2 (gaming). Low-latency path enforced.
40 | GUEST-WIFI | 10.0.40.0/23 | 10.0.40.1 | 10.0.40.10–510 | NGRF-GUEST | ★★ Low Trust | Guest wireless clients. Internet access only. No inter-VLAN routing to any internal segment. Rate-limited per client.
50 | IOT-ISOLATED | 10.0.50.0/23 | 10.0.50.1 | 10.0.50.10–510 | NGRF-IOT | ★ Very Low Trust | Smart home IoT devices (thermostats, lighting, appliances). Internet access only (for cloud services). Strict ACL: no LAN access.
60 | SECURITY-CAM | 10.0.60.0/24 | 10.0.60.1 | 10.0.60.10–200 | NGRF-CCTV (hidden) | ★★★ Medium Trust | IP security cameras, NVR. Inbound to NVR only. No internet access. ACL: camera to NVR only.
70 | SERVER-LAN | 10.0.70.0/24 | 10.0.70.1 | Static only | None (wired only) | ★★★★★ Critical | Servers, NAS, hypervisors, NMS. Strictly controlled inbound access from VLAN 10/20 only. No DHCP (all static IPs).
80 | VOIP-QoS | 10.0.80.0/24 | 10.0.80.1 | 10.0.80.10–200 | None / wired | ★★★ Medium Trust | VoIP handsets, intercom systems. DSCP EF marking enforced. Strict priority queuing at all switch layers.
90 | DMZ | 10.0.90.0/28 | 10.0.90.1 | Static only | None (wired only) | ★★★★ High Trust | Externally-accessible services (web server, game server, VPN endpoint). Stateful firewall between DMZ and all internal VLANs.
99 | QUARANTINE | 10.0.99.0/24 | 10.0.99.1 | 10.0.99.10–250 | Dynamic (enforcement) | ☆ Untrusted | Dynamically assigned by 802.1X / NAC to devices that fail authentication or compliance checks. Internet access only; blocked from all internal resources.
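As an informative aid, the schema above can be machine-checked for internal consistency (gateway inside subnet, no overlapping subnets) with the Python standard library. The snippet below encodes a representative subset of Table 12-01 and is illustrative only.

```python
import ipaddress

# Representative subset of Table 12-01: (VLAN ID, name, subnet, gateway)
vlans = [
    (1,  "NATIVE-MGMT",     "10.0.1.0/24",  "10.0.1.1"),
    (10, "TRUSTED-PRIVATE", "10.0.10.0/23", "10.0.10.1"),
    (30, "VR-PRIORITY",     "10.0.30.0/24", "10.0.30.1"),
    (40, "GUEST-WIFI",      "10.0.40.0/23", "10.0.40.1"),
    (90, "DMZ",             "10.0.90.0/28", "10.0.90.1"),
]

nets = [(vid, name, ipaddress.ip_network(net), ipaddress.ip_address(gw))
        for vid, name, net, gw in vlans]

# Every gateway must live inside its own subnet
for vid, name, net, gw in nets:
    assert gw in net, f"VLAN {vid} ({name}): gateway {gw} not inside {net}"

# No two VLAN subnets may overlap
for i, (vid_a, _, net_a, _) in enumerate(nets):
    for vid_b, _, net_b, _ in nets[i + 1:]:
        assert not net_a.overlaps(net_b), f"VLAN {vid_a} overlaps VLAN {vid_b}"

print("gateway and overlap checks passed")
```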
§ 13
Quality of Service (QoS) Framework
Traffic Classification, Marking, and Queuing Policy

A comprehensive Quality of Service framework is implemented across all layers of the compound network — from the edge routers through the core and wing switches to the wireless APs. The QoS framework ensures that latency-sensitive, mission-critical traffic classes receive guaranteed minimum bandwidth and maximum latency treatment, even during periods of network congestion. The framework follows the DiffServ (Differentiated Services) model, using DSCP markings applied at the edge and honored throughout the switching and routing fabric.

TABLE 13-01 Traffic Classification, DSCP Marking, and Queue Assignment
Traffic Class | Applications | DSCP Value | Per-Hop Behavior | WFQ Queue | Max Latency Target | Bandwidth Guarantee
Class 1 — VR Realtime | VR headset streaming, haptic feedback | EF (46) | Expedited Forwarding | Q0 (Strict Priority) | ≤ 2 ms (intra-wing) | Reserved 20% WAN
Class 2 — Interactive Gaming | Online gaming (UDP), game downloads | CS4 (32) | Class Selector 4 | Q1 | ≤ 10 ms (intra-wing) | 15% WAN
Class 3 — VoIP / Video Call | Zoom, Teams, phone calls, intercom | EF (46) / CS3 | Expedited Forwarding | Q0 (Strict Priority) | ≤ 5 ms | 5% WAN (reserved)
Class 4 — Streaming Video | 4K/8K/HDR Netflix, YouTube, VoD | AF41 (34) | Assured Forwarding 41 | Q2 | ≤ 50 ms | 30% WAN
Class 5 — Business-Critical | NAS I/O, hypervisor traffic, backups | AF31 (26) | Assured Forwarding 31 | Q2 | ≤ 100 ms | 15% WAN
Class 6 — General Web / Apps | HTTP/S browsing, app traffic | CS0 (0) / AF21 | Best Effort / Assured | Q3 | Best Effort | 10% WAN
Class 7 — Bulk Transfer | Software updates, large downloads, torrents | CS1 (8) | Scavenger / Lower Effort | Q4 | Best Effort (deprioritized) | 5% WAN (remaining)
Class 8 — Network Control | OSPF, VRRP, BGP, STP BPDUs, SNMP | CS6 (48) / CS7 | Network Control | Q0 (Strict Priority) | ≤ 1 ms | Always served first
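For reference, the DSCP codepoint occupies the upper six bits of the IPv4 ToS / IPv6 Traffic Class byte, so EF (46) appears on the wire as 0xB8. The sketch below shows how an application endpoint could apply such a marking on Linux via the standard socket option; it is illustrative only, end-host markings are honored only where the switch trust policy permits, and authoritative classification is performed at the network edge as described above.

```python
import socket

DSCP = {"EF": 46, "CS4": 32, "AF41": 34, "AF31": 26, "CS1": 8, "CS6": 48}

def tos_byte(dscp: int) -> int:
    """DSCP sits in the top six bits of the ToS byte (ECN occupies the low two)."""
    return dscp << 2

# Mark a UDP socket's traffic as Expedited Forwarding (Class 1 / Class 3 treatment)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte(DSCP["EF"]))
print(f"EF marks the ToS byte as 0x{tos_byte(DSCP['EF']):02X}")  # 0xB8
```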
§ 14
Security Architecture
Multi-Layer Defense-in-Depth Network Security Framework

The security architecture of the compound network employs a defense-in-depth strategy, in which multiple independent security controls exist at every layer of the network stack. No single security control is relied upon exclusively. The compromise of any one layer or control does not grant an attacker unrestricted access to compound network resources. The following table summarizes the security controls applied at each layer.

TABLE 14-01 Security Controls by Network Layer
Layer | Security Control | Mechanism / Protocol | Enforcement Point
WAN / ISP | Ingress filtering; DDoS mitigation | BGP blackholing, ISP-level scrubbing | Edge routers + ISP
Edge | Stateful firewall; NAT; IDS/IPS | MikroTik RouterOS firewall chains; connection tracking; Suricata IDS integration | Edge routers (RTR-01/02)
Edge | VPN gateway | WireGuard + IPsec IKEv2 for remote access | Edge routers
Core / Distribution | VLAN isolation; ACLs | IEEE 802.1Q; IP ACLs at SVI level on core switches | CORE-SW-01/02, Wing Cores
Access | Port security; DHCP snooping | MAC limiting per port; DHCP snooping binding table; DAI (Dynamic ARP Inspection) | All access switches
Access | 802.1X port authentication | IEEE 802.1X with RADIUS back-end (FreeRADIUS or Cisco ISE) | Wired ports; wireless SSIDs
Wireless | WPA3-Enterprise authentication | WPA3-Enterprise with 192-bit security suite (EAP-TLS) | All APs — NGRF-PRIVATE, NGRF-WORK, NGRF-VR SSIDs
Wireless | WPA3-Personal | SAE (Simultaneous Authentication of Equals) with strong passphrase | NGRF-GUEST, NGRF-IOT SSIDs
Wireless | SSID isolation; AP isolation | Client isolation per VLAN; no peer-to-peer traffic on GUEST / IOT SSIDs | All APs
Wireless | Management Frame Protection | IEEE 802.11w (PMF — Protected Management Frames) mandatory on all SSIDs | All APs
Network-wide | DNS security | DNS-over-TLS (DoT) or DNS-over-HTTPS (DoH) to upstream resolver; internal DNS server for split-horizon resolution | Edge router + internal DNS
Network-wide | NTP authentication | Authenticated NTP (NTPsec) synchronization; all devices locked to internal NTP server | All managed devices
Management | OOB management network | Separate management VLAN (VLAN 1) accessible only via jump server; no direct management access from user VLANs | All infrastructure devices
Management | Encrypted management protocols | SSHv2 only (no Telnet); HTTPS-only web management; SNMPv3 with auth+privacy | All infrastructure devices
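The dynamic VLAN assignment behavior enforced by 802.1X / NAC (Tables 12-01 and 14-01) reduces to a simple decision rule. The sketch below is purely illustrative; the endpoint fields, the function, and the device-class-to-VLAN mapping are assumptions, and the authoritative decision is made by the RADIUS / NAC platform.

```python
from dataclasses import dataclass

@dataclass
class Endpoint:
    mac: str
    authenticated: bool  # passed 802.1X (e.g., EAP-TLS)
    compliant: bool      # passed posture / compliance check
    device_class: str    # "user", "iot", "camera", ...

def assign_vlan(ep: Endpoint) -> int:
    """Illustrative mirror of the NAC policy: fail closed into quarantine."""
    if not (ep.authenticated and ep.compliant):
        return 99  # QUARANTINE: internet only, no internal access
    return {"user": 10, "iot": 50, "camera": 60}.get(ep.device_class, 40)

print(assign_vlan(Endpoint("aa:bb:cc:dd:ee:ff", True, False, "user")))  # -> 99
```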
§ 15
Physical Cabling Infrastructure
Structured Cabling, Fiber Backbone, and Conduit Specification

The physical cabling infrastructure is the passive foundation upon which all active network components operate. Deficiencies in the physical cabling plant will limit the performance of all overlying active equipment, regardless of how capable that equipment may be. This specification therefore imposes strict requirements on all cabling media, connectors, termination quality, and conduit installation, conforming to TIA-568.2-D (copper) and TIA-568.3-D (fiber).

TABLE 15-01 Cabling Media Specification by Application
Application | Cable Category / Type | Connector | Max Segment Length | Supported Data Rate | Deployment Zone
AP to Access Switch (PoE++ uplink) | Cat 6A U/FTP (23 AWG, shielded) | RJ45 (T568B) | 90m permanent link + 10m patch (100m channel) | 2.5GBASE-T (2.5 Gbps) | All floors, all wings
Wired workstation drops | Cat 6A U/FTP (23 AWG, shielded) | RJ45 (T568B) | 90m permanent link + 10m patch (100m channel) | 2.5GBASE-T / 10GBASE-T (2.5 or 10 Gbps) | All wings (workstation zones)
Access Switch → Wing Core (uplink) | OS2 SM Fiber (9/125 μm) | LC/UPC Duplex | 10 km (well within compound) | 25GBASE-LR (25 Gbps) — 2 fibers per link; 2×25G LACP = 50G per switch | IDF to Wing Core MDF; 25GBASE-LR SFP28 transceivers both ends
Wing Core → Central Core (Wings A–D uplink) | OS2 SM Fiber (9/125 μm) | LC/UPC Duplex | 10 km | 100GBASE-LR4 (100 Gbps) — 1×100G to each core chassis; 2×100G MLAG = 200G per wing | Wing MDF to Central MDF; 100GBASE-LR4 QSFP28; C9300X-NM-4C required on both ends
Wing Core → Central Core (Underground uplink) | OS2 SM Fiber (9/125 μm) | LC/UPC Duplex | 10 km | 25GBASE-LR (25 Gbps) — 1×25G to each core chassis; 2×25G MLAG = 50G | UG MDF to Central MDF; 25GBASE-LR SFP28; native C9300X-24Y SFP28 ports
Central Core → Edge Router (uplink) | OS2 SM Fiber (9/125 μm) | LC/UPC Duplex | 10 km | 25GBASE-LR (25 Gbps) — via C9300X-24Y native SFP28 ports (NM-4C ports reserved for wing uplinks) | Core MDF to Edge Router; 25GBASE-LR SFP28 transceivers
ISP ONT → Edge Router | OS2 SM Fiber (9/125 μm) or Cat 6A | LC/UPC or RJ45 | 10 km / 100m | 10GBASE-LR / 1000BASE-T | NDR room to Equipment Room
Underground tunnel backbone | OS2 SM Fiber (9/125 μm) — armored | LC/UPC | Per run — up to 500m | 10GBASE-LR | All underground conduit runs
Exterior perimeter runs | OS2 SM Fiber — direct burial armored | LC/UPC (weatherproof enclosures) | Per run | 10GBASE-LR | Outdoor conduit, direct burial
Core inter-chassis cross-connect (ICL) | OS2 SM Fiber (9/125 μm) | LC/UPC Duplex | < 5m (within equipment room) | 25GBASE-LR (25 Gbps) — 2×25G LACP = 50G ICL aggregate; native C9300X-24Y SFP28 ports (25GBASE-SR or DAC acceptable at <5m) | Equipment room — CORE-SW-01 to CORE-SW-02
¹ Cat 6A shielded (U/FTP) is mandatory throughout for PoE++ runs. Unshielded Cat 6A may be acceptable for non-PoE wired drops subject to engineer approval.
² All fiber runs shall be tested with an OTDR at 1310 nm and 1550 nm wavelengths post-installation. Test results shall be archived.
³ All Cat 6A runs shall be tested to the TIA-568.2-D Cat 6A specification at minimum.
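The OTDR acceptance criterion in footnote ² relates to a conventional optical loss budget. The sketch below shows the arithmetic only; the attenuation, splice, connector, and power-budget values are illustrative assumptions and shall be replaced with the datasheet figures of the transceivers actually procured.

```python
def link_loss_db(length_km: float, splices: int, connectors: int,
                 atten_db_per_km: float = 0.35,  # assumed OS2 attenuation near 1310 nm
                 splice_db: float = 0.1,
                 connector_db: float = 0.5) -> float:
    """Estimated end-to-end insertion loss for a single-mode fiber span."""
    return (length_km * atten_db_per_km
            + splices * splice_db
            + connectors * connector_db)

# Longest intra-compound run is a few hundred metres, so loss is connector-dominated
loss = link_loss_db(length_km=0.5, splices=2, connectors=4)
assumed_power_budget_db = 6.0  # placeholder TX-min minus RX-sensitivity figure
print(f"estimated loss {loss:.2f} dB vs assumed budget {assumed_power_budget_db:.1f} dB")
```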
§ 16
Power Budget and PoE Analysis
Power Delivery Calculation and UPS/Generator Sizing
TABLE 16-01 Network Infrastructure Power Consumption Summary
Component | Qty. | Power per Unit (W) | Total (W) | Notes
MikroTik CCR2004 Edge Router | 2 | ~75W (loaded) | 150W | Includes dual PSU overhead
Cisco Catalyst 9300X Central Core | 2 | ~250W (loaded, no PoE) | 500W | Switching fabric + SFP+ transceivers
Cisco Catalyst 9300X Wing Core | 5 | ~180W (loaded, no PoE) | 900W | Per-wing distribution, no PoE at this tier
Ubiquiti access switch — USW-Pro-XG-24-PoE (switch only) | ~20 | ~50W (switch, excluding PoE load) | 1,000W | Estimate based on 4 floors × 5 wings; actual count TBD
PoE Budget — APs per switch | ~20 switches × 12 APs avg. | ~40W per AP (GT-BE98 Pro) | ~9,600W | 240 APs × 40W demand; 720W PoE budget per switch × 20 = 14,400W available
Asus ROG GT-BE98 Pro (PoE++) | ~40 (anchor APs) | ~40W | 1,600W | Included in PoE budget above
Asus ZenWiFi Pro ET12 (PoE++) | ~200 (density APs) | ~30–35W | ~6,500W | Included in PoE budget above
Server / NAS infrastructure | ~6 units (est.) | ~300W avg. | ~1,800W | VLAN 70 rack equipment
NMS / monitoring server | 1 | ~150W | 150W | UniFi Controller + NMS + logging
Cooling (network rooms only) | Per room | ~500W | ~2,000W | 4 wing MDF rooms + central equipment room
ESTIMATED TOTAL NETWORK INFRASTRUCTURE DRAW | — | — | ~18,600W | ~18.6 kW peak; design UPS/generator for ≥ 25 kW with headroom
NOTE
All networking equipment rooms, including the central equipment room and each wing's MDF, shall be served by an Uninterruptible Power Supply (UPS) of sufficient capacity to sustain all connected equipment for a minimum of 30 minutes at full load, and by a compound-wide standby generator that is capable of powering the full networking infrastructure load continuously. The UPS shall be on a dedicated electrical circuit, separate from general building loads, and shall be monitored via SNMP by the NMS for battery health, load percentage, and estimated runtime.
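The PoE sizing in Table 16-01 can be cross-checked against the per-switch PoE budget of the access switch specified in Table B-01. The short sketch below reproduces that check using the tables' own estimates; the figures remain design-phase assumptions pending the final AP count.

```python
switches = 20
aps_per_switch = 12
watts_per_ap = 40            # GT-BE98 Pro class draw (Table 16-01 estimate)
poe_budget_per_switch = 720  # USW-Pro-XG-24-PoE PoE budget (Table B-01)

demand_per_switch = aps_per_switch * watts_per_ap  # 480 W
total_demand = switches * demand_per_switch        # 9,600 W
total_budget = switches * poe_budget_per_switch    # 14,400 W

print(f"per-switch: {demand_per_switch} W of {poe_budget_per_switch} W "
      f"({demand_per_switch / poe_budget_per_switch:.0%} utilized)")
print(f"compound-wide: {total_demand} W demand vs {total_budget} W available")
```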
§ 17
Performance Benchmarks and SLA
Design-Phase Performance Targets — Internal Network SLA

The following performance benchmarks represent the design-phase targets for the compound network under normal operating conditions (no active equipment failures, no extreme concurrent load events). These figures shall form the basis of internal performance validation testing during network commissioning and shall be re-validated periodically thereafter. All benchmark figures are accompanied by the measurement conditions under which they apply.

TABLE 17-01 Network Performance Benchmark Targets
Benchmark Metric | Target Value | Measurement Conditions | Acceptable Minimum
WAN Downstream — PLDT 10G (single host) | ≥ 9,500 Mbps (9.5 Gbps) | iperf3 to external speedtest node; single wired client on VLAN 10 | ≥ 8,000 Mbps
WAN Aggregate (all 3 ISPs) | ≥ 11,500 Mbps | Simultaneous multi-stream to distinct external endpoints via each ISP | ≥ 10,000 Mbps
Wired LAN throughput (intra-core) | ≥ 2,400 Mbps per client (2.5G link) | iperf3 between two wired clients on same access switch; Cat 6A 2.5G ports | ≥ 2,200 Mbps
Wi-Fi 7 Single Client (6 GHz, 320 MHz, 4×4) | ≥ 4,000 Mbps (real-world) | Wi-Fi 7 MLO client at 2m distance from AP; iperf3 UDP; 6 GHz Low radio | ≥ 3,000 Mbps
Wi-Fi 7 MLO Aggregate (3 bands) | ≥ 9,000 Mbps (real-world) | MLO-capable Wi-Fi 7 client at 2m; iperf3 multi-stream across all MLO links | ≥ 7,000 Mbps
Intra-wing wireless RTT (same AP) | ≤ 1 ms | Ping between two Wi-Fi 7 clients on same AP; 1000-packet ICMP test | ≤ 2 ms
Intra-wing wireless RTT (adjacent APs) | ≤ 3 ms | Ping between two clients on adjacent APs, same floor; 802.11r roaming established | ≤ 5 ms
Cross-wing wireless RTT (core path) | ≤ 8 ms | Ping between clients on Wing A and Wing D; traffic path through core | ≤ 12 ms
VR streaming latency (Wing D intra-wing) | ≤ 3 ms (one-way) | Simulated VR workload; UDP 72 Mbps stream; timestamped packet RTT / 2 | ≤ 6 ms
WAN failover time (PLDT → Globe) | ≤ 3 seconds | Hard disconnect PLDT ONT; measure time to restored internet on test host | ≤ 5 seconds
Core switch failover time (CORE-SW-01 failure) | ≤ 1 second | Hard power-off CORE-SW-01; measure interruption time on active TCP session | ≤ 2 seconds
Edge router VRRP failover | ≤ 3 seconds | Hard power-off EDGE-RTR-01; measure interruption time on active TCP session | ≤ 5 seconds
Wireless roaming transition (802.11r) | ≤ 20 ms | Mobile client walking between adjacent APs; measure RSSI and re-auth time | ≤ 50 ms
Concurrent wireless clients (compound-wide) | ≥ 500 simultaneous clients | All APs loaded with simulated clients via Wi-Fi performance test framework | ≥ 300 clients
Network availability (uptime) | ≥ 99.95% per calendar year (≤ 4.4 hrs downtime) | Continuous monitoring via NMS; includes all planned maintenance windows | ≥ 99.9%
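For commissioning, the wired-throughput rows above lend themselves to scripted validation. The sketch below is an illustrative wrapper around iperf3 that compares a measured TCP result against the Table 17-01 target; it assumes iperf3 is installed, that the JSON output of the installed version exposes end.sum_received.bits_per_second, and that an iperf3 server is available at the hypothetical SERVER-LAN address shown.

```python
import json
import subprocess

TARGET_MBPS = 2_400   # wired LAN intra-switch target (Table 17-01)
MINIMUM_MBPS = 2_200  # acceptable minimum (Table 17-01)

def run_iperf3_tcp(server: str, seconds: int = 10, streams: int = 4) -> float:
    """Run an iperf3 TCP test and return received throughput in Mbps."""
    out = subprocess.run(
        ["iperf3", "-c", server, "-t", str(seconds), "-P", str(streams), "-J"],
        check=True, capture_output=True, text=True).stdout
    result = json.loads(out)
    return result["end"]["sum_received"]["bits_per_second"] / 1e6

mbps = run_iperf3_tcp("10.0.70.10")  # hypothetical iperf3 server on SERVER-LAN
status = "PASS" if mbps >= TARGET_MBPS else ("MARGINAL" if mbps >= MINIMUM_MBPS else "FAIL")
print(f"{mbps:,.0f} Mbps: {status}")
```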

The following comparison summarizes the key wireless throughput figures of this design against prior Wi-Fi generations, illustrating the magnitude of the improvement realized by the Wi-Fi 7 quad-band architecture.

Wi-Fi 7 MLO — 3-Band Aggregate (Real-World Target): ≥ 9,000 Mbps
Wi-Fi 7 Single Radio — 6 GHz 320 MHz (Real-World): ≥ 4,000 Mbps
Wi-Fi 6E — 6 GHz 160 MHz (Real-World, for comparison): ~1,800 Mbps
Wi-Fi 6 — 5 GHz 80 MHz (Real-World, for comparison): ~800 Mbps
Wired 2.5G (Cat 6A to 2.5GBASE-T port): 2,400 Mbps
§ 18
High Availability and Failover Analysis
Redundancy Coverage Matrix and Single-Point-of-Failure Analysis
TABLE 18-01 Redundancy Coverage Matrix — Single Failure Scenario Analysis
Failure Scenario | Failed Component | Redundant Path | Failover Mechanism | Est. Downtime | Impact
Primary ISP failure | PLDT ONT / circuit | Globe 1G + Converge 1G | BGP route withdrawal, auto-failover | < 3 seconds | WAN speed reduced to 2 Gbps; zero LAN impact
Secondary ISP failure (during PLDT outage) | Globe circuit | Converge 1G | BGP withdrawal, route to Converge | < 3 seconds | WAN speed reduced to 1 Gbps; LAN unaffected
Edge router failure (RTR-01) | EDGE-RTR-01 (MASTER) | EDGE-RTR-02 (BACKUP) | VRRP promotion; BACKUP → MASTER | ~3 seconds | Brief TCP interruption; UDP (gaming/VR) resumes < 1s
Central core switch failure | CORE-SW-01 | CORE-SW-02 via cross-link | LACP uplink failure; wing cores re-route via surviving core | < 1 second | Minimal — cross-chassis LACP handles transparently
Wing core switch failure | e.g., WING-A-CORE | None (single wing core per wing) | N/A — Wing A access switches lose uplink | Until replaced | Wing A LAN and Wi-Fi offline until replacement
Access switch failure | e.g., ACCSS-A-FL2 | None (single access switch per floor) | N/A — Floor 2 Wing A ports offline | Until replaced | Floor 2 Wing A APs and wired drops offline; other floors unaffected
Single AP failure | Any one AP | Adjacent APs (overlap coverage) | 802.11k/r — clients roam to neighbor APs automatically | < 1 second (client roam) | Minor coverage reduction; no connectivity loss for mobile clients
Edge router PSU failure | One PSU on RTR-01 | Secondary PSU on same router | Automatic (hot-swap PSU redundancy) | Zero | None
Core switch PSU failure | One PSU on CORE-SW-01 | Secondary PSU on same switch | Automatic (hot-swap) | Zero | None
UPS failure (mains present) | UPS battery/inverter | Mains power continues | Bypass relay | Zero (brief < 20ms relay switch) | UPS battery protection lost; equipment remains online on mains
Power outage (mains loss) | Mains electricity | UPS (30 min) → Generator | UPS instantaneous; generator typically starts in 10–30s | Zero (UPS covers generator spin-up) | None for network equipment within UPS/generator coverage
¹ Wing Core switches represent the only layer in the design without hardware redundancy. A future revision of this specification may address this with dual wing core switches per wing for critical wings (A and D).
² AP count and placement provide inherent redundancy at the wireless layer; a single AP failure is invisible to mobile clients within standard deployment density.
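The downtime figures in Table 18-01 are intended to be verified empirically during commissioning. The following illustrative probe script approximates that measurement by recording the longest gap between successful connection attempts while a failure is induced; it uses a plain TCP connect as the probe, which only approximates the ICMP and live-session tests described in Table 17-01.

```python
import socket
import time

def measure_outage(host: str, port: int = 443, duration_s: int = 60,
                   interval_s: float = 0.2) -> float:
    """Probe host:port at a fixed interval; return the longest observed gap (s)."""
    last_ok = time.monotonic()
    worst_gap = 0.0
    deadline = last_ok + duration_s
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval_s):
                now = time.monotonic()
                worst_gap = max(worst_gap, now - last_ok)
                last_ok = now
        except OSError:
            pass  # probe failed; the gap keeps growing until the next success
        time.sleep(interval_s)
    return max(worst_gap, time.monotonic() - last_ok)

# Run while inducing the failure under test (e.g., powering off EDGE-RTR-01)
print(f"worst observed gap: {measure_outage('1.1.1.1'):.2f} s")
```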
§ 19
Network Management and Monitoring
Centralized Network Operations and Observability Framework

All network infrastructure components in this design are fully managed devices, exposing complete management interfaces for centralized configuration, monitoring, and telemetry collection. The compound shall maintain a dedicated Network Management System (NMS) instance, deployed on the SERVER-LAN VLAN (VLAN 70) on dedicated server hardware within the equipment room. The NMS provides the single pane of glass through which all network infrastructure is observed and administered.

TABLE 19-01 Management Platform Assignment by Device Class
Device Class | Management Platform | Protocol | Key Functions
MikroTik Edge Routers | The Dude (MikroTik) + Grafana + Prometheus | RouterOS API, SNMP v3, REST, SSH | Real-time traffic graphs, BGP state, VRRP health, firewall hit counters, CPU/RAM utilization
Cisco Catalyst Core/Wing Switches | Cisco DNA Center or Catalyst Center + SNMP | NETCONF/YANG, gRPC telemetry, SNMP v3, SSH | Interface utilization, STP topology, VLAN state, MAC table, spanning tree events, QoS queue statistics
Ubiquiti Access Switches | Ubiquiti UniFi Network Controller | Proprietary UniFi, SNMP v3 | PoE budget real-time, port utilization, VLAN assignment, firmware management, topology view
Asus Wi-Fi 7 Access Points | Asus AiMesh Controller + UniFi integration (where applicable) | AiMesh proprietary, SNMP, TR-069 | Client association, RF channel utilization, RSSI heatmaps, roaming event logs, MLO band state
All devices (unified) | Grafana + InfluxDB + Prometheus stack | SNMP polling, gRPC streaming, syslog | Unified dashboard — all KPIs, alerts, historical trending, SLA reporting
Syslog collection | Graylog or OpenSearch / ELK Stack | Syslog (UDP 514 / TCP 6514 TLS) | Centralized log aggregation, security event correlation, audit trail
Network Time | Internal NTP server (Chrony) | NTPv4 | Authoritative time for all network devices and servers; GPS-disciplined if available
DNS | Pi-hole + Unbound (dual server) | DNS-over-TLS upstream; standard DNS internal | Split-horizon DNS, ad filtering, internal hostname resolution, DHCP integration
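As an informative example of the syslog path in Table 19-01, a host or housekeeping script on the SERVER-LAN can forward its own events to the central collector over UDP 514 using Python's standard logging facility. The collector hostname below is a placeholder, not a name defined by this specification.

```python
import logging
import logging.handlers

# Forward local events to the central collector (Graylog / OpenSearch) on UDP 514.
# "syslog.ngrf.internal" is a placeholder for the collector address on VLAN 70.
handler = logging.handlers.SysLogHandler(address=("syslog.ngrf.internal", 514))
handler.setFormatter(logging.Formatter("ngrf-nms: %(levelname)s %(message)s"))

log = logging.getLogger("ngrf")
log.setLevel(logging.INFO)
log.addHandler(handler)

log.info("UPS-01 on battery; estimated runtime %s minutes", 31)
```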
§ 20
Conclusion and Future Expansion Roadmap
Summary of Design Intent and Strategic Forward Planning

The network infrastructure defined in this document, designated NGRF-NET-001 Revision 1.0, constitutes a fully realized, enterprise-grade, campus-scale network designed for deployment in a multi-wing residential compound. The architecture provides an aggregate WAN capacity of twelve gigabits per second across three independent ISP connections, a switching fabric with a combined central core capacity of nine hundred and sixty gigabits per second, and a wireless fabric capable of delivering approximately nine to nineteen gigabits per second of aggregate Wi-Fi 7 throughput to a single client at close range, with compound-wide wireless capacity scaling proportionally with the number of deployed access points.

This network shall not merely meet the demands of today's applications — it shall exceed the demands of the next decade's applications, deployed and operational before those applications exist.

The design achieves its stated goals across all five primary design dimensions:

High Throughput is achieved through the deployment of PLDT's 10G fiber plan as the primary WAN, 10G fiber uplinks at every distribution tier, 2.5G PoE++ at every AP drop point, and IEEE 802.11be Wi-Fi 7 with 320 MHz 6 GHz channels and Multi-Link Operation across all access points. No backbone link above the access layer is a bottleneck relative to the aggregate WAN capacity; the 2.5G AP uplink is the only tier below the Wi-Fi 7 peak air rate, and its upgrade is already provided for in the Table 20-01 roadmap.

Redundancy and High Availability is achieved through a comprehensive layered redundancy strategy: three ISPs, dual edge routers in VRRP, dual core switches in cross-connected HA, LACP uplinks at every distribution tier, redundant PSUs in all critical equipment, and UPS plus generator backup power. The result is a network that survives virtually any single hardware failure without service interruption.

Seamless Wireless Roaming is achieved through dense Wi-Fi 7 AP deployment conforming to per-zone coverage engineering targets, universal deployment of IEEE 802.11k/r/v roaming assistance protocols, and the MLO capability of Wi-Fi 7 which inherently reduces roaming latency by maintaining multiple simultaneous radio associations. A client moving at walking pace through the compound will experience zero perceptible wireless connectivity interruption.

VR Readiness is achieved through the combined effect of the 6 GHz quad-band Wi-Fi 7 platform, dense AP spacing in Wing D (VR arena), strict QoS enforcement placing VR traffic in the Expedited Forwarding queue at all network layers, and the MLO architecture's inherent latency advantages. The target wireless latency of ≤ 3 ms one-way (≤ 6 ms RTT) within Wing D is well within the ≤ 7 ms tolerance of consumer and professional VR headsets.

Scalability is achieved by specifying enterprise-grade components at all switching tiers, all of which support expansion of port count, additional stacking members, and additional VLANs without architectural redesign. The addition of new floors, wings, or facilities to the compound requires only the addition of further wing core and access switching capacity, with the central core and edge layer having abundant capacity reserve to absorb substantial growth.

TABLE 20-01 Future Expansion Roadmap and Upgrade Provisions
Roadmap Item | Description | Priority | Dependencies | Est. Timeline
Wing Core Redundancy | Add second Cisco 9300X per wing (dual wing cores) to eliminate the one remaining single point of failure per wing | HIGH | Budget authorization; additional rack space per wing MDF | Year 2 post-deployment
25G Access Layer Upgrade | Replace 2.5G access switches with 25G multi-gigabit switches as Wi-Fi 7 AP maximum practical throughput approaches 10G per AP | MEDIUM | Client device Wi-Fi 7 adoption; AP firmware maturity | Year 3–5
Wi-Fi 7 Rev. 2 / Wi-Fi 8 Ready | Cabling, PoE, and management infrastructure are sized to support next-generation AP hardware with no changes beyond AP replacement | LOW (pre-planned) | IEEE 802.11bn (Wi-Fi 8) ratification; compatible hardware availability | Year 5–7
PLDT 100G Upgrade | Upgrade PLDT ISP connection to 100G if/when commercially available in PH; edge router replacement to CCR2216 or equivalent required | LOW | PLDT commercial 100G residential availability in PH | Year 5+
Private 5G / mmWave Overlay | Deploy private 5G NR small cells or mmWave 802.11ay point-to-point links for highest-density zones (Wing D VR arena) as complementary ultra-low-latency layer | RESEARCH | NTC licensing for private 5G; hardware maturity and cost | Year 3–5
SD-WAN Overlay | Implement SD-WAN software overlay (Cisco Viptela or MikroTik MPLS) across all three ISPs for application-aware intelligent path selection beyond basic BGP policy | MEDIUM | SD-WAN platform licensing | Year 2
Zero Trust Network Access (ZTNA) | Implement ZTNA platform (e.g., Cloudflare Zero Trust, Cisco Duo) for all remote access and inter-VLAN access policy, replacing static ACL-based controls | MEDIUM | Identity provider integration; client agent deployment | Year 2–3
Appendix A
§ A
IP Address Allocation Plan
IPv4 Addressing Schema — NGRF-NET-001
TABLE A-01 IPv4 Subnet Allocation — Complete Addressing Plan
Network Block (CIDR / Size) | VLAN | Gateway (HSRP VIP) | DHCP Range | Static Range | Purpose
10.0.1.0/24 (256 hosts) | VLAN 1 | 10.0.1.1 | 10.0.1.10–200 | 10.0.1.201–254 | Network Management / OOB
10.0.10.0/23 (512 hosts) | VLAN 10 | 10.0.10.1 | 10.0.10.10–510 | 10.0.10.200–511 | Trusted Private (Principal + Family)
10.0.20.0/24 (256 hosts) | VLAN 20 | 10.0.20.1 | 10.0.20.10–200 | 10.0.20.201–254 | Workstations / Creative Lab
10.0.30.0/24 (256 hosts) | VLAN 30 | 10.0.30.1 | 10.0.30.10–200 | — | VR Headsets / Gaming (QoS Priority)
10.0.40.0/23 (512 hosts) | VLAN 40 | 10.0.40.1 | 10.0.40.10–510 | — | Guest Wi-Fi (Internet-only)
10.0.50.0/23 (512 hosts) | VLAN 50 | 10.0.50.1 | 10.0.50.10–510 | — | IoT Devices (Isolated)
10.0.60.0/24 (256 hosts) | VLAN 60 | 10.0.60.1 | 10.0.60.10–200 | 10.0.60.200–254 | Security Cameras / NVR
10.0.70.0/24 (256 hosts) | VLAN 70 | 10.0.70.1 | Static only | 10.0.70.1–254 | Servers / NAS / Hypervisors
10.0.80.0/24 (256 hosts) | VLAN 80 | 10.0.80.1 | 10.0.80.10–200 | 10.0.80.201–254 | VoIP / Intercom
10.0.90.0/28 (16 hosts) | VLAN 90 | 10.0.90.1 | Static only | 10.0.90.2–14 | DMZ (externally accessible services)
10.0.99.0/24 (256 hosts) | VLAN 99 | 10.0.99.1 | 10.0.99.10–250 | — | Quarantine (non-compliant devices)
10.0.200.0/30 (4 hosts) | — | — | Static | 10.0.200.1–2 | VRRP Link — RTR-01 ↔ RTR-02
10.0.201.0/30 (4 hosts) | — | — | Static | 10.0.201.1–2 | OSPF Link — RTR-01 ↔ CORE-SW-01
10.0.202.0/30 (4 hosts) | — | — | Static | 10.0.202.1–2 | OSPF Link — RTR-01 ↔ CORE-SW-02
10.0.203.0/30 (4 hosts) | — | — | Static | 10.0.203.1–2 | OSPF Link — RTR-02 ↔ CORE-SW-01
10.0.204.0/30 (4 hosts) | — | — | Static | 10.0.204.1–2 | OSPF Link — RTR-02 ↔ CORE-SW-02
Appendix B
§ B
Complete Bill of Materials
Procurement Reference — NGRF-NET-001 Baseline
NOTE
Quantities marked as "TBD" in the access switching and AP tiers are subject to finalization following completion of architectural floor plans and RF site survey. The figures below represent conservative minimum estimates based on zone characterization in Section 4. All pricing is indicative and subject to change.
TABLE B-01 Bill of Materials — Network Infrastructure — NGRF-NET-001
Line | Component | Part Number / SKU | Qty. | Unit Price (USD est.) | Extended (USD est.) | Lead Time
1 | MikroTik CCR2004-1G-12S+2XS Edge Router | CCR2004-1G-12S+2XS | 2 | $1,450 | $2,900 | 2–4 wks
2 | Cisco Catalyst 9300X-24Y-A Switch (central core ×2 and wing core ×5 — all tiers now unified on same chassis) | C9300X-24Y-A | 7 | $12,000 | $84,000 | 4–8 wks
3 | Cisco C9300X-NM-4C Network Module (4× QSFP28 100G/40G dual-rate) — installed in all 7 chassis | C9300X-NM-4C= | 7 | $3,800 | $26,600 | 2–4 wks
3b | 100GBASE-LR4 QSFP28 Transceiver, OS2 SM, LC Duplex, 10km (wing-to-core; 2 per wing × 4 wings × 2 ends = 16) | QSFP-100G-LR4-S= | 16 | $650 | $10,400 | 1–2 wks
4 | Ubiquiti UniFi Switch Pro XG 24 PoE (16× 10G + 8× 2.5G PoE++ RJ45, 2× 25G SFP28 uplink, 720W PoE) | USW-Pro-XG-24-PoE | ~20 (TBD) | $1,799 | ~$35,980 | 1–3 wks
4b | 25GBASE-LR SFP28 Transceiver, OS2 SM, LC Duplex, 10km (access-to-wing + UG uplinks + ICL; ~4 per access switch pair + core) | SFP-25G-LR-S= | 120 | $85 | $10,200 | 1–2 wks
5 | Asus ROG Rapture GT-BE98 Pro (Wi-Fi 7 Quad-Band) | GT-BE98 Pro | ~40 (TBD) | $699 | ~$27,960 | 1–2 wks
6 | Asus ZenWiFi Pro ET12 (Wi-Fi 7 Tri/Quad-Band) | ZenWiFi Pro ET12 | ~200 (TBD) | $399 | ~$79,800 | 1–3 wks
7 | OS2 SM Fiber Bulk Roll (9/125, LSZH) — 1km | OS2-SM-LSZH-1000M | 10 rolls | $280 | $2,800 | 1 wk
8 | Cat 6A U/FTP Cable Bulk Roll (23 AWG, LSZH) — 305m | Cat6A-UFTP-305M | 60 rolls | $185 | $11,100 | 1–2 wks
9 | 10G SFP+ LC/UPC Single-Mode Transceiver (10GBASE-LR) | SFP-10G-LR | 80 | $45 | $3,600 | 1 wk
10 | 25G SFP28 LC/UPC Single-Mode Transceiver (25GBASE-LR) | SFP28-25G-LR | 20 | $120 | $2,400 | 1–2 wks
11 | LC/UPC Duplex Fiber Patch Cord (OS2, 3m) | OS2-LC-LC-3M | 200 | $12 | $2,400 | 1 wk
12 | 24-port LC Fiber Patch Panel (1U) | FPP-24LC-1U | 20 | $95 | $1,900 | 1 wk
13 | 24-port Cat 6A Keystone Patch Panel (1U, shielded) | PP-CAT6A-24SH | 30 | $120 | $3,600 | 1 wk
14 | APC Smart-UPS SRTL 10kVA 208V (network room UPS) | SRTL10KRM4U | 5 | $6,800 | $34,000 | 3–6 wks
15 | 19" 42U Network Equipment Rack (with cable management) | NetRack-42U-800 | 8 | $650 | $5,200 | 1–2 wks
16 | Server (NMS + UniFi Controller) | Custom / Dell R550 equiv. | 1 | $4,500 | $4,500 | 2–4 wks
17 | RADIUS Server / DNS Server (VM or dedicated) | VM on NMS server | 1 | $0 (software) | $0 | —
18 | Ubiquiti UniFi Network Controller License (Perpetual) | UniFi-Network-Enterprise | 1 | $0 (self-hosted) | $0 | —
19 | FreeRADIUS / Cisco ISE VM License | ISE-VM-K9 or FOSS | 1 | $0–$3,000 | ~$1,500 | —
20 | Grafana + InfluxDB + Prometheus Stack (self-hosted) | OSS software | 1 | $0 | $0 | —
21 | Cat 6A RJ45 Shielded Keystone Jacks (bag of 50) | KJ-CAT6A-SH-50 | 20 bags | $38 | $760 | 1 wk
22 | Armored OS2 SM Fiber — Direct Burial (per 500m reel) | OS2-ARM-DB-500M | 4 reels | $520 | $2,080 | 1–2 wks
23 | Cisco IOS-XE DNA Advantage License (per switch, 3yr) | C9300-DNA-A-48-3Y | 7 | $1,800 | $12,600 | Electronic
24 | MikroTik Rack Mount Kit for CCR2004 | CCR2004-RM-KIT | 2 | $25 | $50 | 1 wk
25 | Structured Cabling Installation Labor (est.) | Contractor / LOE | 1 (lot) | — | ~$15,000–$25,000 | Per schedule
ESTIMATED TOTAL MATERIAL + EQUIPMENT COST (USD) — REV. 1.1 | ~$388,130 | Excl. labor, taxes, import duties, and contingency. Reflects unified C9300X-24Y chassis across all tiers, NM-4C modules ×7, 100GBASE-LR4 QSFP28 transceivers, 25GBASE-LR SFP28 transceivers, and the USW-Pro-XG-24-PoE access switch upgrade.
Add 20% contingency + 12% VAT + import duties (PH) | ~$510,000–$545,000 | All-in total estimate (USD); subject to final procurement
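The derivation of the all-in range from the baseline material total is straightforward arithmetic; the short sketch below reproduces it, noting that Philippine import duties vary per line item and are therefore left outside the calculation.

```python
material_total = 388_130  # Table B-01 baseline material estimate (USD)
contingency = 0.20        # contingency allowance from Table B-01
vat = 0.12                # PH VAT

subtotal = material_total * (1 + contingency)  # ~465,756 USD
before_duties = subtotal * (1 + vat)           # ~521,647 USD
print(f"~{before_duties:,.0f} USD before import duties "
      f"(duties push the figure toward the upper end of the quoted range)")
```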
NOTE
The largest cost drivers in this BOM are the Wi-Fi 7 access point tier (Lines 5 and 6) and the Cisco switching layer (Lines 2 and 3), each representing roughly 28–29% of total material cost. This is expected and appropriate — the wireless layer is the most user-visible and performance-critical component of the network, and density cannot be sacrificed without directly compromising the VR latency, roaming, and throughput targets established in Sections 11 and 17, while the Cisco switching layer is non-negotiable for the enterprise-grade L3 routing, HA, and telemetry requirements of this specification.
Document Control Footer
NGRF-NET-001 — Revision 1.0
Residential Campus Network Infrastructure Technical Specification
Prepared by: F.G.D. Robledo  ·  NewGen / NGRF  ·  April 10, 2026
Classification: UNCLASSIFIED // PROPRIETARY
Sections: 20 + 2 Appendices  ·  Tables: 22  ·  Figures: 2
Standards Compliance:
IEEE 802.11be (Wi-Fi 7) · IEEE 802.3bt (PoE++) · IEEE 802.1Q · RFC 5798 (VRRP)
RFC 4271 (BGP-4) · TIA-568.2-D / 568.3-D · WPA3-Enterprise · NTC MC 05-08-2020

Next Review Date: April 10, 2027 or upon major scope change
ECN Register: See attached Engineering Change Notice log (NGRF-NET-001-ECN)
Principal Engineer — F.G.D. Robledo
Technical Reviewer
Date of Final Approval
DISCLAIMER: This document is prepared for design planning purposes. All component specifications, pricing, and performance figures are based on manufacturer data sheets, published standards, and engineering estimates as of April 2026. Actual deployed performance will vary based on RF environment, client device capabilities, ISP service quality, and installation quality. The principal engineer assumes no liability for deviations between specified and realized performance attributable to factors outside the control of the network design. All regulatory compliance obligations (NTC frequency licensing, PEC electrical compliance) remain the responsibility of the deploying party and their licensed contractors.