Lesson 10
Remote Procedure Calls (RPC)
Objective
Check the status of the portmapper process rpcbind and use the rpcinfo command to determine which RPC services are available.
Remote Procedure Calls: Architecture, Management, and Diagnostics
Understanding Remote Procedure Calls
Remote Procedure Call (RPC) represents one of the foundational distributed computing technologies in Unix and Linux systems. Developed by Sun Microsystems in the 1980s and formalized in RFC 1057 (later updated in RFC 5531), RPC enables a program running on one computer to execute subroutines on a remote computer as if they were local function calls. This abstraction shields developers from the complexity of network communication—socket creation, connection management, data serialization, error handling—allowing them to write distributed applications using familiar procedural programming patterns.
The RPC model follows a client-server architecture where a client process invokes a procedure that executes on a remote server, receives results, and continues execution. From the programmer's perspective, the remote call looks syntactically identical to a local function call. The RPC runtime system handles all network communication transparently, managing connection establishment, parameter marshaling (converting language-specific data structures into a network-transmittable format), transmission over TCP or UDP, remote execution, result marshaling, and return value delivery.
Why RPC Was Revolutionary
Before RPC, building distributed applications required extensive low-level network programming. Developers manually created sockets, established connections, implemented custom serialization protocols, handled network errors, and managed all aspects of client-server communication. Every distributed application reinvented these mechanisms, leading to incompatible implementations, subtle bugs, and substantial development overhead.
RPC standardized this process through automatically generated stub code. A developer would define remote procedures in an Interface Definition Language (IDL), and the RPC compiler (rpcgen) would generate client stubs (which package parameters and invoke the remote procedure) and server skeletons (which unpack parameters, call the actual implementation, and return results). This code generation eliminated entire classes of errors and accelerated distributed application development.
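As a minimal sketch of this workflow (the interface name, program number, and file names are illustrative, not taken from any real service), a developer might define a one-procedure interface in a .x file and run rpcgen over it:
# add.x: a hypothetical one-procedure interface definition
cat > add.x <<'EOF'
program ADDPROG {
    version ADDVERS {
        int ADD(int) = 1;   /* procedure number 1 */
    } = 1;                  /* version number 1 */
} = 0x20000001;             /* program number in the user-defined range */
EOF
rpcgen add.x   # typically emits add.h, the client stub (add_clnt.c), and the server skeleton (add_svc.c)
The developer links the generated stub into the calling program and fills in the procedure body on the server side; everything between those two points is generated code.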
Sun Microsystems built critical network services on RPC including Network File System (NFS) for remote file access and Network Information Service (NIS/NIS+) for centralized user and configuration management. These services became ubiquitous in Unix environments throughout the 1990s and remain in use today, though their prominence has declined with the rise of alternative technologies.
RPC Architecture and Protocol Mechanisms
Understanding RPC's architectural components clarifies how the protocol operates and why services like rpcbind are necessary for RPC functionality.
The RPC Communication Model
An RPC transaction follows this sequence:
- Client Invocation: The client application calls what appears to be a local procedure. This is actually a client stub generated by rpcgen.
- Parameter Marshaling: The client stub packages (marshals) the procedure's parameters into a network message using External Data Representation (XDR), RPC's standardized data encoding format.
- RPC Call Message: The client stub sends an RPC call message over the network to the server. This message includes the RPC program number, version number, procedure number, and marshaled parameters.
- Server Reception: The server's RPC runtime receives the message and dispatches it to the appropriate server stub.
- Parameter Unmarshaling: The server stub unpacks (unmarshals) the parameters from XDR format back into native data structures.
- Procedure Execution: The server stub calls the actual procedure implementation with the unmarshaled parameters.
- Result Marshaling: After execution, the server stub marshals the return value into XDR format.
- RPC Reply Message: The server stub sends an RPC reply message containing the marshaled result back to the client.
- Result Unmarshaling: The client stub receives the reply, unmarshals the return value, and returns it to the calling application.
- Client Continuation: The client application receives the return value and continues execution as if a local procedure had completed.
This entire process is transparent to application code—the developer writes what looks like a normal function call, and the RPC infrastructure handles all network communication.
XDR: External Data Representation
Different computer architectures represent data differently—byte order (endianness), floating-point formats, character encodings, and structure padding vary across platforms. RPC solves this heterogeneity problem through XDR (RFC 4506), a platform-independent data serialization standard.
XDR defines canonical representations for integers (32-bit big-endian), floating-point numbers (IEEE 754), strings (length-prefixed byte sequences, padded to four-byte boundaries), arrays, structures, and unions. Both client and server convert between their native data representation and XDR format during marshaling and unmarshaling. This ensures that an Intel x86 client can communicate with a SPARC server, or a Linux client with a Solaris server, without data corruption or misinterpretation.
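As a concrete illustration (a hand-worked example, not tool output), XDR encodes the 2-byte string "hi" in eight bytes:
00 00 00 02   length = 2, as a 32-bit big-endian integer
68 69         the bytes 'h' and 'i'
00 00         zero padding to the next 4-byte boundary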
XDR influenced later serialization formats including Protocol Buffers and Apache Thrift. The concept of a platform-independent intermediate representation for distributed communication became fundamental to distributed systems design.
RPC Program Numbers and Versioning
Unlike traditional network services that use well-known port numbers (HTTP on 80, SSH on 22), RPC services are identified by program numbers, version numbers, and procedure numbers. This three-level identification scheme provides flexibility and evolution capabilities:
Program Number: A 32-bit unsigned integer uniquely identifying an RPC service. Program numbers are divided into ranges:
- 0x00000000 - 0x1fffffff: Defined by Sun/Oracle (NFS, NIS, etc.)
- 0x20000000 - 0x3fffffff: User-defined services
- 0x40000000 - 0x5fffffff: Transient programs (dynamically assigned)
- 0x60000000 - 0xffffffff: Reserved
For example, NFS uses program number 100003, NIS uses 100004, and the mount daemon uses 100005.
Version Number: Allows multiple versions of the same RPC program to coexist. When a service evolves with incompatible changes, the version number increments. Clients specify which version they support, enabling gradual migration. NFS has evolved through multiple versions (NFSv2, NFSv3, NFSv4) each with different capabilities and protocols.
Procedure Number: Within a program and version, individual procedures are numbered. Each procedure corresponds to a specific remote operation. For instance, NFS version 3 defines procedures for reading files (6), writing files (7), creating directories (9), and dozens of other file system operations.
When a client invokes an RPC, it specifies all three identifiers. The server must implement the requested program number and version, and the procedure number determines which specific operation to execute.
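On Linux systems, these well-known assignments are recorded in /etc/rpc and can be queried through the name service switch with getent (the alias column in the typical output shown here varies by distribution):
getent rpc nfs      # typical output: nfs     100003  nfsprog
getent rpc ypserv   # typical output: ypserv  100004  ypprog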
The Portmapper: rpcbind Service
The dynamic nature of RPC program numbers creates a discovery problem: how does a client determine which TCP or UDP port a particular RPC service is listening on? Unlike HTTP servers, which predictably listen on port 80, RPC servers bind to arbitrary available ports when they start. The portmapper service (originally called portmap, now rpcbind on modern systems) solves this discovery problem.
How the Portmapper Works
The portmapper is the only RPC service with a well-known port: TCP and UDP port 111. When an RPC server starts, it:
- Binds to an available ephemeral port (typically in the range 32768-65535)
- Registers itself with the portmapper by connecting to port 111
- Provides its program number, version number, protocol (TCP/UDP), and the port it's listening on
- The portmapper stores this registration in its internal mapping table
When an RPC client wants to connect to an RPC service, it:
- Connects to the portmapper on port 111
- Issues a GETPORT call specifying the desired program number, version number, and protocol
- The portmapper looks up the registration in its table
- Returns the port number where the service is listening
- The client disconnects from the portmapper and connects directly to the service on the returned port
This two-phase connection process—first query the portmapper for the port, then connect directly to the service—enables dynamic port allocation while maintaining service discoverability.
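You can emulate the client side of this exchange by hand: first ask the portmapper where a program is registered, then probe that port directly (a sketch; the host name nfs-server is a placeholder):
# Phase 1: ask the portmapper where NFSv3 over TCP is registered
rpcinfo -p nfs-server | awk '$1 == 100003 && $2 == 3 && $3 == "tcp" {print $4}'
# Phase 2: connect directly to the returned port (2049 on most NFS servers)
nc -zv nfs-server 2049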
Critical Importance of rpcbind
Because all RPC services depend on the portmapper for registration and discovery, rpcbind is a single point of failure for RPC-based infrastructure. If rpcbind stops running or crashes:
- New RPC servers cannot register themselves
- RPC clients cannot discover service port numbers
- Existing RPC connections may continue working (they already know the port), but new connections fail
- NFS mounts become inaccessible
- NIS authentication and directory services fail
Error messages mentioning "program not registered" or "RPC: Program not registered" almost always indicate either that rpcbind is not running, or that the requested RPC service hasn't started and registered with rpcbind. Diagnosing RPC issues begins with verifying rpcbind status.
Checking rpcbind Status
Linux administrators must be able to verify that rpcbind is running and healthy. Modern Linux distributions manage rpcbind through systemd, though legacy process inspection commands remain useful.
Using systemd to Check rpcbind
On systemd-based distributions (RHEL 7+, Ubuntu 15.04+, Debian 8+, Fedora, etc.), check rpcbind status:
systemctl status rpcbind
Healthy output looks like:
● rpcbind.service - RPC bind portmap service
Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2026-02-06 08:15:32 EST; 2h 14min ago
Docs: man:rpcbind(8)
Main PID: 1247 (rpcbind)
Tasks: 1 (limit: 4915)
Memory: 1.2M
CPU: 142ms
CGroup: /system.slice/rpcbind.service
└─1247 /usr/bin/rpcbind -w -f
Feb 06 08:15:32 hostname systemd[1]: Starting RPC bind portmap service...
Feb 06 08:15:32 hostname systemd[1]: Started RPC bind portmap service.
Key indicators:
- Active: active (running) - Service is operational
- Loaded: enabled - Service starts automatically on boot
- Main PID: 1247 - Process ID for troubleshooting
- No error messages in recent log entries
If rpcbind is not running, start it:
systemctl start rpcbind
Enable it to start on boot:
systemctl enable rpcbind
View detailed logs:
journalctl -u rpcbind -n 50
Using Process Commands
Traditional Unix process inspection commands also verify rpcbind status. These work across all Unix variants including systems without systemd:
ps -ef | grep rpcbind
Or on Linux systems:
ps aux | grep rpcbind
Output showing a running rpcbind process:
rpc 1247 1 0 08:15 ? 00:00:00 /usr/bin/rpcbind -w -f
The key elements are:
- User: Typically runs as user 'rpc' or 'rpcuser' for security
- PID: Process identifier
- Command: Full path to rpcbind with arguments
If grep returns no results (except the grep command itself), rpcbind is not running.
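To avoid matching the grep process itself, pgrep offers a tidier alternative on Linux:
pgrep -a rpcbind   # prints the PID and full command line if rpcbind is running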
Check listening ports to verify rpcbind is accepting connections:
ss -tlnp | grep 111
Or using netstat:
netstat -tlnp | grep 111
Expected output shows rpcbind listening on port 111:
tcp LISTEN 0 128 0.0.0.0:111 0.0.0.0:* users:(("rpcbind",pid=1247,fd=8))
tcp6 LISTEN 0 128 [::]:111 [::]:* users:(("rpcbind",pid=1247,fd=11))
This confirms rpcbind is listening on both IPv4 (0.0.0.0:111) and IPv6 ([::]:111), ready to accept portmapper queries.
Using rpcinfo for RPC Service Discovery
The rpcinfo utility queries rpcbind to discover which RPC services are registered and available. This diagnostic tool is essential for troubleshooting RPC-related issues and understanding which services a system provides.
Basic rpcinfo Usage
List all registered RPC programs on the local system:
rpcinfo -p
Or specify a remote host:
rpcinfo -p hostname
rpcinfo -p 192.168.1.50
The -p option queries the portmapper and displays all registered services. Sample output:
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  37284  status
    100024    1   tcp  38447  status
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049  nfs_acl
    100021    1   udp  39571  nlockmgr
    100021    3   udp  39571  nlockmgr
    100021    4   udp  39571  nlockmgr
    100021    1   tcp  42785  nlockmgr
    100021    3   tcp  42785  nlockmgr
    100021    4   tcp  42785  nlockmgr
Each line represents one registered RPC service. The columns indicate:
- program: RPC program number (100000 is portmapper itself, 100003 is NFS)
- vers: Version number of the RPC program
- proto: Protocol (tcp or udp)
- port: TCP or UDP port where the service listens
- service: Human-readable service name
Notice that the portmapper (program 100000) lists itself multiple times—once for each supported version and protocol combination. This is normal; RPC services commonly support multiple versions and both TCP and UDP.
Advanced rpcinfo Options
Query specific RPC program by number:
rpcinfo -p | grep 100003
Test connectivity to a specific RPC service using a null procedure call (procedure 0, which every RPC service must support):
rpcinfo -t hostname program_number version
For example, test NFS version 3 over TCP:
rpcinfo -t nfs-server 100003 3
Success output:
program 100003 version 3 ready and waiting
Failure indicates the service isn't registered, isn't running, or network connectivity is blocked:
rpcinfo: RPC: Program not registered
program 100003 version 3 is not available
Test using UDP instead of TCP:
rpcinfo -u hostname program_number version
Broadcast to all RPC servers on the local network:
rpcinfo -b program_number version
This sends a broadcast RPC call and lists all hosts that respond, useful for discovering RPC servers without knowing their addresses in advance.
Delete a stale registration from the portmapper (requires root):
rpcinfo -d program_number version
This is useful when a service crashes without properly unregistering, leaving ghost entries in the portmapper table.
Interpreting rpcinfo Output for Troubleshooting
When diagnosing RPC problems, rpcinfo output reveals several key facts:
Service Registration: If a service should be available but doesn't appear in rpcinfo output, it either didn't start, failed during startup, or crashed without unregistering. Check the service's systemd status and logs.
Port Information: RPC services bind to dynamically allocated ports. Firewall rules must allow traffic to these ports, which change whenever services restart. This is why RPC-based services are challenging to secure with traditional port-based firewalls.
Protocol Support: Services listing both TCP and UDP entries support both transport protocols. Clients can choose based on their requirements—UDP for lower latency but unreliable delivery, TCP for reliable ordered delivery with overhead.
Version Compatibility: Multiple version entries indicate backward compatibility. An NFS server supporting versions 3 and 4 can serve both legacy and modern clients. Clients must request a supported version or the connection fails.
Stale Registrations: If a service crashed or was forcibly killed (SIGKILL), it may leave stale entries in the portmapper. The rpcinfo output shows the service registered, but connection attempts fail because nothing is listening on the listed port. Use rpcinfo -d to clean stale entries, then restart the service.
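A quick way to spot a stale entry is to compare the port the portmapper reports against the ports actually in the LISTEN state. A minimal sketch, using the status service from the sample output above and assuming it has a TCP registration:
PORT=$(rpcinfo -p | awk '$5 == "status" && $3 == "tcp" {print $4; exit}')
ss -tln | grep -q ":$PORT " || echo "status is registered on port $PORT but nothing is listening"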
Major RPC Consumers: NFS and NIS
Understanding why critical services depend on RPC clarifies its continued relevance despite the rise of alternative distributed computing models.
Network File System (NFS)
NFS, arguably the most important RPC-based service, enables Unix and Linux systems to mount remote file systems as if they were local. A client accessing files on an NFS share transparently issues RPC calls to the NFS server for operations like reading files, writing files, creating directories, checking permissions, and so on. The RPC abstraction shields both the client kernel and user applications from network complexity—applications use normal file system calls (open, read, write, close), and the kernel's NFS client driver translates these into RPC calls to the remote server.
NFS has evolved through multiple versions, each implemented as distinct RPC programs:
- NFSv2 (RFC 1094): Original version, 32-bit file sizes (2GB limit), UDP-only
- NFSv3 (RFC 1813): Added 64-bit file sizes, TCP support, improved performance
- NFSv4 (RFC 7530): Integrated locking, eliminated portmapper dependency by using fixed port 2049, added strong security
NFSv2 and NFSv3 depend heavily on the portmapper and several auxiliary RPC services (mountd for mount protocol, lockd for file locking, statd for lock recovery). NFSv4 modernized the architecture and no longer requires the portmapper, using only TCP port 2049 for all operations.
Check for NFS services in rpcinfo output:
rpcinfo -p | grep -E "(nfs|mount|nlock)"
NFS server systems will show multiple RPC registrations for NFS itself, the mount daemon, the network lock manager, and related services.
Network Information Service (NIS)
NIS (originally called Yellow Pages, often seen as "yp" in command names) provides centralized management of user accounts, groups, hostnames, and other administrative data across multiple Unix systems. Before LDAP became popular, NIS was the standard solution for maintaining consistent user databases across network environments.
NIS operates as a suite of RPC services:
- ypserv (program 100004): The main NIS server daemon that answers queries for maps (databases)
- ypbind (program 100007): Client daemon that locates and binds to NIS servers
- yppasswd (program 100009): Service for changing user passwords in NIS
- ypxfrd (program 600100069): Map transfer service for replicating NIS databases
When a user logs in to an NIS client system, the login process queries the NIS server via RPC to authenticate credentials and retrieve user information. All of these queries depend on the portmapper for service discovery.
Check for NIS services:
rpcinfo -p | grep yp
Modern environments have largely replaced NIS with LDAP (Lightweight Directory Access Protocol), which provides better security, more flexible schema, and doesn't require RPC. However, many legacy Unix environments still rely on NIS, making understanding of RPC essential for administrators managing these systems.
RPC in the Modern Context
RPC dominated distributed computing in the 1980s and 1990s, but its relevance has declined significantly in contemporary infrastructure. Understanding this evolution helps administrators make informed architectural decisions.
Why RPC Declined
Several factors contributed to RPC's declining adoption:
Firewall Challenges: RPC's dynamic port allocation makes firewall configuration extremely difficult. Unlike HTTP (port 80/443) or SSH (port 22), administrators cannot simply allow specific ports—they must allow port 111 plus an unpredictable range of ephemeral ports that change whenever services restart. This conflicts with security best practices favoring minimal open ports.
NAT Incompatibility: RPC embeds IP addresses in protocol messages, assuming direct end-to-end connectivity. Network Address Translation (NAT) breaks this assumption, causing RPC connections to fail or behave unpredictably. NFSv4 addressed this, but earlier RPC services struggle behind NAT.
Security Weaknesses: Traditional RPC authentication mechanisms (AUTH_UNIX, AUTH_DES) provide minimal security. AUTH_UNIX sends credentials unencrypted, trusting that the client is honest about its user ID. AUTH_DES uses the obsolete DES encryption algorithm. Modern security requirements demand stronger authentication and encryption.
Tight Coupling: RPC enforces synchronous request-response patterns and strong coupling between client and server versions. This rigidity conflicts with modern distributed systems design favoring asynchronous messaging, loose coupling, and independent service evolution.
Web Services Revolution: The emergence of HTTP-based APIs (REST, SOAP, later GraphQL) provided simpler, firewall-friendly alternatives. HTTP uses well-known ports, traverses NAT seamlessly, supports widely-understood tools (browsers, curl, Postman), and integrates easily with web infrastructure.
Modern RPC Alternatives
Contemporary distributed applications typically use:
REST (Representational State Transfer): HTTP-based APIs using JSON or XML for data encoding. Conceptually simpler than RPC, stateless, cacheable, and universally accessible through standard web tools.
gRPC: Google's modern RPC framework using HTTP/2, Protocol Buffers for serialization, and generating client/server code similar to classic RPC but with better performance, multiplexing, and streaming support. Philosophically similar to Sun RPC but redesigned for contemporary requirements.
Message Queues: Asynchronous communication through systems like RabbitMQ, Apache Kafka, or AWS SQS. Decouples clients and servers, enables pub-sub patterns, and provides better fault tolerance.
GraphQL: Query language for APIs providing flexible data fetching, reducing over-fetching and under-fetching compared to REST.
Where RPC Persists
Despite declining prominence, RPC remains relevant in specific contexts:
- NFS Deployments: Many organizations continue using NFS for shared storage, particularly in HPC (high-performance computing) and traditional Unix shops. NFSv3 still requires the portmapper.
- Legacy Systems: Decades-old applications built on Sun RPC continue operating in production, requiring administrators to maintain RPC infrastructure.
- Embedded Systems: Some embedded and industrial control systems still use RPC for inter-process communication.
- Academic and Educational Environments: Teaching distributed systems concepts often includes RPC as a foundational technology.
Administrators working with these systems must understand RPC architecture, manage rpcbind, diagnose with rpcinfo, and configure appropriate firewall rules—skills that remain valuable even as new development gravitates toward HTTP-based APIs.
RPC Security Considerations
RPC services present significant security challenges that administrators must address through careful configuration and network controls.
Authentication Weaknesses
The default RPC authentication method, AUTH_UNIX, trusts the client to honestly report its user ID and group IDs. The server accepts these credentials without cryptographic verification. An attacker controlling a client machine can trivially forge credentials and access RPC services with arbitrary user privileges. This trust model assumes secure, controlled networks where all clients are trustworthy—an assumption rarely valid in modern environments.
AUTH_DES attempted to address this through DES-encrypted credentials, but DES is cryptographically broken and should never be used. Modern alternatives include Kerberos-based authentication (RPCSEC_GSS), which provides mutual authentication and encrypted sessions, though implementation complexity limits adoption.
Firewall Configuration
Securing RPC services behind firewalls requires allowing port 111 (portmapper) plus the dynamic port range used by RPC services. A common approach:
- Configure services to use fixed port ranges rather than random ephemeral ports
- Allow only these fixed ports through the firewall
- Restrict access by source IP address—only trusted networks can connect
For NFS, many administrators configure services to use specific ports:
# /etc/sysconfig/nfs (RHEL/CentOS) or /etc/default/nfs-kernel-server (Debian/Ubuntu)
STATD_PORT=4000
STATD_OUTGOING_PORT=4001
LOCKD_TCPPORT=4002
LOCKD_UDPPORT=4002
MOUNTD_PORT=4003
Then configure firewall rules to allow these specific ports from trusted networks. Without fixed ports, administrators must allow broad port ranges (32768-65535), substantially increasing attack surface.
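With fixed ports in place, the corresponding firewalld rules might look like the following (the internal zone is an assumption, and the port numbers come from the example configuration above):
firewall-cmd --permanent --zone=internal --add-port=111/tcp --add-port=111/udp
firewall-cmd --permanent --zone=internal --add-port=2049/tcp
firewall-cmd --permanent --zone=internal --add-port=4000-4003/tcp
firewall-cmd --permanent --zone=internal --add-port=4000-4002/udp
firewall-cmd --reload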
Network Access Control
Best practices for RPC security include:
- Network Segmentation: Isolate RPC services on private networks inaccessible from the internet. Use VPNs for remote access.
- TCP Wrappers: Configure /etc/hosts.allow and /etc/hosts.deny to restrict which hosts can connect to rpcbind and RPC services (see the sketch after this list).
- SELinux/AppArmor: Mandatory access control systems can confine RPC services, limiting damage if compromised.
- Monitoring: Log RPC connection attempts and watch for suspicious patterns—connections from unexpected sources, failed authentication attempts, port scanning.
- Minimal Exposure: Only run necessary RPC services. Disable and stop unused services like NIS if not required.
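For example, a hosts.allow/hosts.deny pair restricting rpcbind to a single trusted subnet might look like this (the subnet is illustrative, and rpcbind must be built with TCP wrapper support for the rules to apply):
# /etc/hosts.allow: permit rpcbind only from the trusted subnet
rpcbind : 192.168.19.0/255.255.255.0
# /etc/hosts.deny: refuse everyone else
rpcbind : ALL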
Never expose RPC services directly to the internet without strong authentication, encryption, and careful access controls. The combination of weak authentication and dynamic ports makes RPC services attractive targets for attackers.
Practical RPC Troubleshooting Workflow
When facing RPC-related issues, follow this systematic diagnostic approach:
Step 1: Verify rpcbind Status
systemctl status rpcbind
Ensure the service is active and running. If not, start it and check logs for startup errors.
Step 2: Check Listening Ports
ss -tlnp | grep 111
Confirm rpcbind is listening on port 111 for both IPv4 and IPv6.
Step 3: List Registered Services
rpcinfo -p
Verify the required RPC service appears in the registration list. If missing, the service itself may not be running.
Step 4: Test Service Connectivity
rpcinfo -t localhost program_number version
Test whether the service responds to RPC calls. Successful response confirms the service is operational.
Step 5: Check Service-Specific Logs
journalctl -u service_name -n 100
Review logs for the specific RPC service (nfs-server, ypserv, etc.) to identify configuration errors or runtime failures.
Step 6: Verify Firewall Rules
firewall-cmd --list-all
Or for iptables:
iptables -L -n -v
Ensure port 111 and service-specific ports are allowed from required source networks.
Step 7: Test Network Connectivity
telnet server_ip 111
Or:
nc -v server_ip 111
Verify network path exists between client and server, with no intermediate firewalls blocking port 111.
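These steps can be collected into a small helper script. The following is a minimal sketch (the script name and defaults are hypothetical; it assumes the systemd, iproute2, and rpcinfo tools used above are installed):
#!/bin/sh
# rpc-check.sh: quick RPC health check (hypothetical helper script)
HOST="${1:-localhost}"   # target host
PROG="${2:-100003}"      # RPC program number (default: NFS)
VERS="${3:-3}"           # program version

systemctl is-active --quiet rpcbind && echo "rpcbind: active" || echo "rpcbind: NOT active"
ss -tln | grep -q ':111 ' && echo "port 111: listening" || echo "port 111: NOT listening"
rpcinfo -p "$HOST" | awk -v p="$PROG" '$1 == p {found=1} END {exit !found}' \
    && echo "program $PROG: registered on $HOST" \
    || echo "program $PROG: NOT registered on $HOST"
rpcinfo -t "$HOST" "$PROG" "$VERS"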
Hands-On Practice: Checking RPC Services
The following simulations provide practical experience checking rpcbind status and using rpcinfo to discover available RPC services on Linux and Solaris systems.
Linux Simulation
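Here are the steps for checking the portmapper process on Linux systems:
- You are logged in as root. Issue the systemd command to verify that the rpcbind service is running on your system.
Solution:
systemctl status rpcbind
Look for "Active: active (running)" in the output. If the service is inactive, start it with systemctl start rpcbind.
- You have verified that rpcbind is running. Now issue the command to list all RPC services registered with the local portmapper.
Solution:
rpcinfo -p
The output lists each registered service with its program number, version, protocol, and port, as described earlier in this lesson.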
Solaris Simulation
Here are the steps for checking the portmapper process on Solaris systems:
- You are logged in as root. Issue the command to verify that the rpcbind process is running on your system.
Solution:
ps -ef | grep rpcbind
This displays all processes with "rpcbind" in their name. Look for a line showing the rpcbind daemon process.
- You have verified that rpcbind is running. The rpcinfo program is a diagnostic tool that issues a test RPC call to an RPC server; if the server is running, it sends back a response that tells you which ports rpcbind has mapped. Now, issue the command to display these port mappings on your own system.
Solution:
/usr/sbin/rpcinfo -p localhost
Solaris places rpcinfo in /usr/sbin, so specify the full path if it's not in your PATH environment variable. The output lists all registered RPC services with their program numbers, versions, protocols, and ports.
- You have tested your own system. Let's assume that other Unix systems running RPC exist on your subnet. These systems include 192.168.19.63, 192.168.19.64, 192.168.19.93, 192.168.19.95, and 192.168.19.98. Now, use rpcinfo to test the system with the IP address of 192.168.19.63.
Remember: You have not set the PATH for this command, so specify the full location of the program (/usr/sbin/).
Solution:
/usr/sbin/rpcinfo -p 192.168.19.63
This queries the portmapper on the remote system and displays its registered RPC services. Compare the output to your local system to understand which services each host provides. If the command times out or reports "RPC: Program not registered," either rpcbind isn't running on the remote host, or network connectivity/firewall rules prevent access to port 111.
Summary
Remote Procedure Call (RPC) protocol, developed by Sun Microsystems and standardized in RFC 5531, revolutionized distributed computing by abstracting network communication complexity behind familiar procedural interfaces. The portmapper service (rpcbind) solves the discovery problem inherent in RPC's dynamic port allocation, providing a well-known endpoint (port 111) where clients can query for service locations.
Critical Unix services including NFS (network file sharing) and NIS (centralized user management) depend on RPC infrastructure. Administrators must verify rpcbind is running using systemd commands or process inspection, use rpcinfo to discover registered services and troubleshoot issues, and understand the security implications of exposing RPC services to networks.
While RPC's relevance has declined with the rise of HTTP-based APIs, RESTful services, and modern RPC frameworks like gRPC, it remains essential in legacy Unix environments, NFS deployments, and systems requiring backward compatibility. Understanding RPC architecture, managing rpcbind, and diagnosing with rpcinfo remain valuable skills for Linux administrators supporting contemporary heterogeneous infrastructures where decades-old and modern technologies coexist.
[1] Network File System (NFS): A distributed file system protocol originally developed by Sun Microsystems allowing remote access to files over a network as if they were local. NFS uses RPC for communication between clients and servers, transparently translating local file system calls into network requests. NFSv4 modernized the protocol with built-in security and no longer requires the portmapper.
[2] Network Information Service (NIS): Also known as Yellow Pages (yp), NIS is Sun Microsystems's client-server protocol for distributing system configuration data including user accounts, groups, hostnames, and other administrative information across multiple Unix systems. NIS operates entirely through RPC, with services like ypserv (NIS server) and ypbind (client binding agent) depending on the portmapper for discovery. Modern environments typically use LDAP instead of NIS for directory services.
