
Change Log 1.8

From Obsolete Lustre Wiki
Revision as of 04:04, 23 April 2010 by Stinson1947 (Talk | contribs)

(Updated: Feb 2010)

Changes from v1.8.1.1 to v1.8.2

Support for networks:

  • socklnd - any kernel supported by Lustre™
  • qswlnd - Qsnet kernel modules 5.20 and later
  • openiblnd - IbGold 1.8.2
  • o2iblnd - OFED 1.1, 1.2.0, 1.2.5, 1.3, 1.4.1 and 1.4.2
  • viblnd - Voltaire ibhost 3.4.5 and later
  • ciblnd - Topspin 3.2.0
  • iiblnd - Infiniserv 3.3 + PathBits patch
  • gmlnd - GM 2.1.22 and later
  • mxlnd - MX 1.2.10 or later
  • ptllnd - Portals 3.3 / UNICOS/lc 1.5.x, 2.0.x

Support for kernels:

  • 2.6.16.60-0.42.8 (SLES 10)
  • 2.6.27.39-0.3.1 (SLES11, i686 & x86_64 only)
  • 2.6.18-164.11.1.el5 (RHEL 5)
  • 2.6.18-164.6.1.0.1.el5 (OEL 5)

Client support for unpatched kernels: (see Patchless Client)

  • 2.6.16 - 2.6.30 vanilla (kernel.org)

Recommended e2fsprogs version: 1.41.6.sun1

The async journal commit feature (bug 19128) and the cancel lock before replay feature (bug 16774) are disabled by default.

Severity: minor
Description: should update lp_alive for non-router peers.

Severity: enhancement
Description: LNet router shuffler.

Severity: enhancement
Description: LNet fine grain routing support.

Severity: normal
Description: router checker stops working when system wall clock goes backward
Details: use monotonic timing source instead of system wall clock time.

Severity: enhancement
Description: avoid asymmetrical router failures

Severity: enhancement
Description: multiple-instance support for kptllnd

Severity: normal
Description: ksocknal_close_conn_locked connection race
Details: A race was possible when ksocknal_create_conn calls ksocknal_close_conn_locked for already closed conn.

Severity: enhancement
Description: port router pinger to userspace

Severity: normal
Description: kptllnd HELLO protocol deadlock
Details: kptllnd HELLO protocol doesn't run to completion in finite time

Severity: normal
Description: LNet selftest fixes and enhancements

Severity: enhancement
Description: allow a test node to be a member of multiple test groups

Severity: enhancement
Description: MXLND: eliminate hosts file, use arp for peer nic_id resolution
Details: an update from the upstream developer Scott Atchley.

Severity: enhancement
Description: Update RHEL5.4 kernel to 2.6.18-164.11.1.el5 and OEL5.4 kernel to 2.6.18-164.11.1.0.1.el5.

Severity: enhancement
Description: Update SLES11 kernel to 2.6.27.39-0.3.1.

Severity: enhancement
Description: Update supported SLES10 kernel to 2.6.16.60-0.42.8.

Severity: enhancement
Description: Update kernel to RHEL5.4 2.6.18-164.6.1.el5 and OEL5 2.6.18-164.6.1.0.1.el5 (both with in-kernel OFED enabled).

Severity: enhancement
Description: Build kernels (RHEL5, OEL5 and SLES10/11) using the vendor's own kernel spec file.

Severity: enhancement
Description: Vanilla kernel 2.6.30 patchless client support.

Severity: major
Frequency: rare
Description: bad entry in directory xxx: inode out of bounds
Details: fix locking issue in the rename path which could race with any other operations updating the same directory.

Severity: enhancement
Description: Make watchdog timer messages clearer and more descriptive.

Severity: normal
Description: cp -p command does not preserve the dates and timestamp
Details: mtime could be spoiled by a write callback

Severity: normal
Description: Clear imp_force_reconnect correctly in ptlrpc_connect_interpret()

Severity: normal
Description: Allow non-root access for "lfs check".
Details: Added a check in obd_class_ioctl() for OBD_IOC_PING_TARGET.

Severity: enhancement
Description: quotacheck performance/scaling issues
Details: reduce quotacheck time on empty filesystem by skipping uninit group.

Severity: enhancement
Description: Enhancement for lfs(1) command to use numeric uid/gid.

Severity: enhancement
Description: Adjust locks' extents on their first enqueue, so that at the time they get granted, there is no need for another pass through the queues since they are already shaped into the proper forms.

Severity: normal
Description: Fix mds_shrink_intent_reply()/mds_intent_policy() to pass correct arguments and prevent LBUG() in lustre_shrink_reply_v2().

Severity: normal
Description: Change tunefs.lustre and mkfs.lustre --mountfsoptions so that exactly the specified mount options are used. Leaving off any "mandatory" mount options is an error. Leaving off any default mount options causes a warning, but is allowed. Change errors=remount-ro from mandatory to default. Sanitize the mount string before storing it. Update man pages accordingly.
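For illustration, a format command under the new semantics (the device, fsname, and MGS NID are hypothetical): omitting a default option such as errors=remount-ro now only warns, while omitting a mandatory option is rejected.

```shell
# Format an OST with an explicit mount-option string (names illustrative).
# errors=remount-ro is now a default rather than a mandatory option, so
# leaving it out of --mountfsoptions produces a warning, not an error.
mkfs.lustre --ost --fsname=testfs --mgsnode=mgs@tcp0 \
    --mountfsoptions="errors=remount-ro,extents,mballoc" /dev/sdb
```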

Severity: normal
Description: mds_getattr() should return 0, even if mds_fid2entry() fails with -ENOENT. Also fix in ptlrpc_expire_one_request() to print signed time difference.

Severity: enhancement
Description: Remove set_info(KEY_UNLINKED) from MDS/OSC

Severity: enhancement
Description: Clients can replay thousands of unused locks during recovery
Details: Don't replay unused locks (only read locks for now) during recovery. This feature is disabled by default and can be enabled by running the following command on the clients: lctl set_param ldlm.cancel_unused_locks_before_replay=1
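A sketch of toggling this feature on a client, assuming the usual lctl set_param/get_param convention for the parameter named in this entry:

```shell
# Enable cancellation of unused (read) locks before replay on a client;
# the feature is disabled by default.
lctl set_param ldlm.cancel_unused_locks_before_replay=1

# Inspect the current setting.
lctl get_param ldlm.cancel_unused_locks_before_replay
```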

Severity: normal
Description: can't stat files in some situations.
Details: improve OSC data initialization when a target is added to the MDS, and add the ability to resend a too-big getattr request if the client does not have information about the OST.

Severity: normal
Description: Prevent inconsistencies between the Linux and Lustre mount structures.
Details: Wait indefinitely in server_wait_finished() until mnt_count drops. Make the sleep interruptible.

Severity: enhancement
Description: Communicate OST degraded/readonly state via statfs to MDS
Details: Flags in the statfs returned from OSTs indicate whether the OST is in a degraded RAID state, or if the filesystem has turned read-only after a filesystem error is detected.

Severity: normal
Frequency: rare
Description: don't panic if EPROTO was hit when reading symlink
Details: correctly handle the request reference in error cases.

Severity: normal
Frequency: common
Description: open sometimes returns ENOENT instead of EACCES
Details: Permission checking should be part of the open phase of mds_open, not the lookup phase, so the server should set the DISP_OPEN_OPEN disposition before starting the permission check. Also, there is no need to revalidate the dentry if the client already holds a LOOKUP lock.

Severity: normal
Frequency: on servers with multiple network interfaces
Description: enable client interface failover
Details: When a child reconnects from another NID, properly update export nid hash position and ldlm reverse import.

Severity: enhancement
Description: implemented direct I/O with arbitrary (nonaligned) memory addresses and file offsets.

Severity: enhancement
Description: added more recovery timeout options.

Severity: enhancement
Description: added llapi_file_open, llapi_file_create, llapi_file_get_stripe man pages.

Severity: normal
Frequency: only on systems with clients writing to an OST on the same node
Description: Avoid deadlock for local client writes
Details: Use new OBD_BRW_MEMALLOC flag to notify OST about writes in the memory freeing context. This allows OST threads to set the PF_MEMALLOC flag on task structures in order to allocate memory from reserved pools and complete IO. Use GFP_HIGHUSER for OST allocations for non-local client writes, so that the OST threads generate memory pressure and allow inactive pages to be reclaimed.

Severity: normal
Frequency: rare
Description: lock ordering violation between &cli->cl_sem and _lprocfs_lock
Details: move ldlm namespace creation to the setup phase to avoid grabbing _lprocfs_lock with cl_sem held

Severity: normal
Frequency: only during format of test systems
Description: Unable to run several mkfs.lustre on loop devices at the same time
Details: mkfs.lustre returns error 256 on concurrent loop device formatting. The solution is to handle the error properly.

Severity: enhancement
Description: implement an async create (obd_async_create) method for osc, to avoid waiting too long for new OST objects while holding an ldlm lock.

Severity: normal
Frequency: occasionally during network problems
Description: client not allowed to reconnect to OST because of active request
Details: abort bulk requests received by the OST once the client has timed out, since the client will resend the request anyway. The client now also retries the reconnect to the same server if a connect request fails with -EBUSY or -EAGAIN.

Severity: normal
Frequency: rare, when using widely striped files and one OST is down.
Description: don't return an error if we created a subset of objects for a file.
Details: lov_update_create_set() uses set->set_success as the index for created objects, so if some requests failed there will be a hole at the end of the array; qos_shrink_lsm() can then be used to allocate a correctly sized lsm.

Severity: normal
Description: Slow stale export processing during normal start up
Details: The global mgc lock prevents OST setup from running in parallel. Replace the global lock with a per-config_llog_data semaphore.

Severity: normal
Description: Out of order replies might be lost on replay
Details: In ptlrpc_retain_replayable_request(), if we cannot find a retained request with a tid smaller than the one currently being added, add it to the start, not the end, of the list.

Severity: normal
Description: BUG: soft lockup - CPU#1 stuck for 10s! [ll_mdt_07:4523]
Details: add cond_resched() calls to avoid hogging the CPU for too long in the hash code. Also make lustre_hash_for_each_empty() more efficient.

Severity: enhancement
Description: Performance improvements for debug messages with D_RPCTRACE, D_LDLM, D_QUOTA options.

Severity: normal
Frequency: only with NFS export
Description: (lov_merge.c:74:lov_merge_lvb()) ASSERTION(spin_is_locked(&lsm->lsm_lock)) failed (SR 71691004)
Details: Fix a race in the nfs export code by populating inode info while the new inode is still locked

Severity: enhancement
Description: add a new file in procfs called force_lbug. Writing to this file triggers an LBUG. For test purposes only.

Severity: normal
Description: OOM killer causes node hang
Details: really interrupt the sleep in osc_enter_cache on signals

Severity: normal
Description: LustreError: 9153:0:(quota_context.c:622:dqacq_completion()) LBUG
Details: fix race during quota release on the slave.

Severity: enhancement
Description: smaller hash bucket sizes, cleanups
Details: increase hash table sizes and enable rehashing for pools, uuid, nid & per-nid stats.

Severity: enhancement
Description: Add ldiskfs maxdirsize mount option
Details: add a maximum directory size (maxdirsize) mount option
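A hedged example of passing the option when mounting the backing target (the device, mount point, and byte value are illustrative; the option spelling follows the description above):

```shell
# Cap directory size on the target via the new ldiskfs mount option
# (paths and value illustrative).
mount -t lustre -o maxdirsize=268435456 /dev/sdc /mnt/mdt
```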

Severity: normal
Description: panic in ll_statahead_thread
Details: prevent the parent thread from being killed before its child

Severity: normal
Frequency: only with 16TB device
Description: unable to perform "mount -t lustre" of 16TB OST device
Details: Mounting 16TB LUNs failed due to three bugs in mkfs.lustre.

Severity: normal
Description: ASSERTION(atomic_read(&imp->imp_inflight) == 0) failed
Details: the unregistering count should be zero if no RPCs are in flight.

Severity: normal
Description: hyperion: Oops during metabench
Details: Correct the refcount of lov_request_set

Severity: enhancement
Description: Add mptlinux and nxge drivers to Lustre builds

Severity: enhancement
Description: Fix watchdog timer message to be more clear
Details: Make watchdog timer messages clearer and more descriptive.

Severity: normal
Description: LNET soft lockups in socknal_cd thread
Details: don't hog the CPU for active connecting if another connd is accepting a connection request from the same peer

Severity: normal
Description: recovery-small test_17 hang
Details: Land several AT improvements & fixes.

Severity: normal
Description: MDS panic and hanging client processes
Details: Replace exp_ops_stats with exp_nid_stats->nid_stats

Severity: normal
Description: OSS stuck in recovery.
Details: fix race during recovery. class_unlink_export, class_set_export_delayed and target_queue_last_replay_reply may race while increasing/decreasing obd_recoverable_clients and obd_delayed_clients, causing recovery to wait forever.

Severity: enhancement
Description: add cascading_rw.c to lustre/tests

Severity: normal
Description: filter_last_id() NULL deref
Details: lprocfs_filter_rd_last_id() should check for the fully setup obd device, before proceeding further.

Severity: enhancement
Description: Loadgen improvements
Details: stacksize and locking fixes for loadgen

Severity: normal
Description: Quiet CERROR("dirty %d > system dirty_max %d\n"
Details: The atomic_read() and the atomic_inc() are not covered by a lock, so they may safely race and trip this CERROR() unless we add a small fudge factor (+1).

Severity: enhancement
Description: shrink_slab: nr=-9223362083340912175
Details: fix a spurious message from shrink_slab reporting a negative nr

Severity: normal
Description: Quiet bogus previously committed transno error
Details: suppress the "server went back in time" error message which is always printed even in the common case after a client eviction

Severity: enhancement
Description: Parallel statfs() calls result in client eviction
Details: cache statfs data for 1s.

Severity: normal
Description: parallel-scale test_compilebench: @@@@@@ FAIL: compilebench failed: 1
Details: fix several issues in the pinger code that caused clients to go too long without pinging servers, resulting in evictions.

Severity: normal
Description: e2fsck should warn when MMP update interval is extended
Details: print mmp_check_interval and make it possible to abort mount operation in case it takes too long.

Severity: normal
Description: mdsrate-create-large.sh, BUG: soft lockup - CPU#0 stuck for 10s!
Details: fix bug in the RHEL5's jbd2 callback patch.

Severity: normal
Description: drop number of active requests when queued for recovery
Details: Now that we take a reference on the original request instead of making a copy of it for recovery, we need to drop the number of active requests, or the queued requests will prevent all request processing when they exceed (srv->srv_threads_running - 1).

Severity: enhancement
Description: refuse to invalidate operational quota files when they are in use
Details: an attempt to invalidate operational quota files on the quota master is not actually permitted by VFS (returning -EPERM), but we should not depend on that and should return the error earlier.

Severity: normal
Description: Applications stuck in jbd2_log_wait_commit during exit
Details: fix a deadlock between kjournald2, trying to acquire a page lock, and an ost_io thread holding that lock while waiting for journal commit.

Changes from v1.8.1 to v1.8.1.1

Support for networks:

  • socklnd - any kernel supported by Lustre™
  • qswlnd - Qsnet kernel modules 5.20 and later
  • openiblnd - IbGold 1.8.2
  • o2iblnd - OFED 1.1, 1.2.0, 1.2.5, 1.3 and 1.4.1
  • viblnd - Voltaire ibhost 3.4.5 and later
  • ciblnd - Topspin 3.2.0
  • iiblnd - Infiniserv 3.3 + PathBits patch
  • gmlnd - GM 2.1.22 and later
  • mxlnd - MX 1.2.1 or later
  • ptllnd - Portals 3.3 / UNICOS/lc 1.5.x, 2.0.x

Support for kernels:

  • 2.6.16.60-0.42.4 (SLES 10)
  • 2.6.27.29-0.1 (SLES11, i686 & x86_64 only)
  • 2.6.18-128.7.1.el5 (RHEL 5)

Client support for unpatched kernels: (see Patchless Client)

  • 2.6.16 - 2.6.27 vanilla (kernel.org)

Recommended e2fsprogs version: 1.41.6.sun1

File join has been disabled in this release, refer to bugzilla 16929

NFS export is disabled when the stack size is < 8192, since NFSv4 export of a Lustre file system with a 4K stack may cause a stack overflow. For more information, refer to bugzilla 17630

ext4 support for RHEL5 is experimental and thus should not be used in production.

Severity: enhancement
Description: Add OEL5 support.

Severity: enhancement
Description: Update kernel to SLES11 2.6.27.29-0.1.

Severity: major
Description: File checksum failures with OST read cache on
Details: Disable page poisoning when the bulk transfer has to be aborted because the client got evicted.

Severity: normal
Description: Don't allow the osc next id to step backward.
Details: a race between next id allocation and the ll_sync thread can set a wrong osc next id and destroy valid OST objects.

Severity: enhancement
Description: Update kernel to RHEL5 2.6.18-128.7.1.el5.

Severity: enhancement
Description: Update kernel to SLES10 SP2 2.6.16.60-0.42.4.

Severity: normal
Description: Changes in raid5-large-io-rhel5.patch to calculate sectors properly

Severity: normal
Description: Increase the default BLK_DEF_MAX_SECTORS value for RHEL5 and SLES11

Severity: normal
Description: Do not send statfs() requests to OSTs disabled by administrator.
Details: Check in lov_prep_statfs_set() for non-NULL ltd_exp.

Severity: normal
Description: Error handling in osc_statfs_interpret() has been improved.
Details: Check in osc_statfs_interpret() for EBADR.

Severity: normal
Description: Do not update ctime for the deleted inode.
Details: Check in mds_reint_unlink() before calling fsfilt_setattr().

Severity: normal
Description: Increase the size of the LDLM resource hash.
Details: Bump up RES_HASH_BITS=12.

Severity: normal
Description: correctly send lsm on open replay
Details: The MDS trusts the LSM size on replay open, but the client can set a wrong lsm buffer size.

Severity: normal
Description: Deadlock between filter_destroy() and filter_commitrw_write().
Details: filter_destroy() does not hold the DLM lock over the whole operation. If the DLM lock is dropped, filter_commitrw() can go through, causing the deadlock between page lock and i_mutex. The i_alloc_sem should also be held in filter_destroy() while truncating the file.

Severity: normal
Description: truncate starts GFP_FS allocation under transaction causing deadlock
Details: ldiskfs_truncate calls grab_cache_page which may start page allocation under an open transaction. This may lead to calling prune_icache with consequent lustre reentrance.

Severity: normal
Frequency: only when down/upgrading the MDS to 1.6/1.8 while 1.8 clients are still up and when the OST pool feature is used
Description: interop testing got LBUG when run dd with OST pool :LustreError: 30032:0:(llite_lib.c:1913:ll_replace_lsm()) LBUG
Details: down/upgrading the MDS to a version that doesn't/does support OST pool can cause clients to crash because the lsm has changed behind their back.

Severity: normal
Description: missing tree_status on 1.8.1 RPM build
Details: make rpms failed because the tree_status file is missing.

Severity: normal
Description: continuing LustreError "mds adjust qunit failed!"
Details: don't print message on the console when ->adjust_qunit fails.

Severity: normal
Description: don't increase ldlm timeout if previous client was evicted
Details: if a client doesn't respond to a blocking callback within the adaptive ldlm enqueue timeout, don't adjust the adaptive estimate when the lock is next granted.

Severity: normal
Description: The OST is unmounted without all writes to last_rcvd landing on disk, which affects recovery negatively.
Details: make sure all exports have been properly destroyed by the zombie thread before stopping the target.

Severity: normal
Description: Performance degradation with O_DIRECT between 1.6 & 1.8.1 b190
Details: disable write barrier for ext4/SLES11.

Severity: normal
Description: Kernel panic - not syncing: Out of memory and no killable processes... on OSS when iozone
Details: fix memory leak in the journal checksum patch.

Severity: normal
Description: group quota "too many blocks" OSS crashes
Details: we should keep the same uid/gid for lquota_chkquota() and lquota_pending_commit()

Severity: normal
Description: LustreError: 9153:0:(quota_context.c:622:dqacq_completion()) LBUG
Details: don't LBUG on release quota error. Just a workaround until the problem is understood.

Changes from v1.8.0.1 to v1.8.1

Support for networks:

  • socklnd - any kernel supported by Lustre
  • qswlnd - Qsnet kernel modules 5.20 and later
  • openiblnd - IbGold 1.8.2
  • o2iblnd - OFED 1.1, 1.2.0, 1.2.5, 1.3 and 1.4.1
  • viblnd - Voltaire ibhost 3.4.5 and later
  • ciblnd - Topspin 3.2.0
  • iiblnd - Infiniserv 3.3 + PathBits patch
  • gmlnd - GM 2.1.22 and later
  • mxlnd - MX 1.2.1 or later
  • ptllnd - Portals 3.3 / UNICOS/lc 1.5.x, 2.0.x

Support for kernels:

  • 2.6.16.60-0.39.3 (SLES 10)
  • 2.6.27.23-0.1 (SLES11, i686 & x86_64 only)
  • 2.6.18-128.1.14.el5 (RHEL 5)

Client support for unpatched kernels: (see Patchless Client)

  • 2.6.16 - 2.6.27 vanilla (kernel.org)

Recommended e2fsprogs version: 1.41.6.sun1

File join has been disabled in this release, refer to bugzilla 16929

NFS export is disabled when the stack size is < 8192, since NFSv4 export of a Lustre filesystem with a 4K stack may cause a stack overflow. For more information, refer to bugzilla 17630

ext4 support for RHEL5 is experimental and thus should not be used in production.

Severity: normal
Description: router_proc.c is rewritten to use the sysctl interface for parameters residing in /proc/sys/lnet

Severity: normal
Description: LNet selftest fixes and enhancements

Severity: enhancement
Description: MXLND: eliminate hosts file, use arp for peer nic_id resolution
Details: an update from the upstream developer Scott Atchley.

Severity: enhancement
Description: add a new LND option to control peer buffer credits on routers

Severity: normal
Description: Fixing deadlock in usocklnd
Details: A deadlock was possible in usocklnd due to race condition while tearing connection down. The problem resulted from erroneous assumption that lnet_finalize() could have been called holding some lnd-level locks.

Severity: major
Description: Protocol V2 of o2iblnd
Details: o2iblnd V2 has several new features:

  • map-on-demand: disabled by default; it can be enabled with the module parameter "map_on_demand=@value@", where @value@ must be >= 0 and < 256. A value of 0 disables map-on-demand; any other valid value enables it.
o2iblnd will create an FMR or physical MR for RDMA if the number of RD fragments exceeds @value@.
Enabling map-on-demand uses less memory per new connection, but slightly more CPU for RDMA.
  • iWARP: to support iWARP, enable map-on-demand; 32 and 64 are recommended values. iWARP will probably fail for values >= 128.
  • OOB NOOP message: resolves a deadlock on routers.
  • tunable peer_credits_hiw (high water mark for returning credits): the default value equals (peer_credits - 1); it can be changed to any value between peer_credits/2 and (peer_credits - 1). A lower value is recommended for high-latency networks.
  • tunable message queue size: always equals peer_credits; a higher value is recommended for high-latency networks.
  • It is compatible with earlier versions of o2iblnd.
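Since these are module parameters, they would typically be set in the module configuration. A sketch with illustrative values (peer_credits=8 is an assumption, which puts peer_credits_hiw=4 at the allowed lower bound of peer_credits/2):

```shell
# /etc/modprobe.d/ko2iblnd.conf (values illustrative)
# map_on_demand=32 is within the range recommended above for iWARP;
# peer_credits_hiw must lie between peer_credits/2 and peer_credits-1.
options ko2iblnd map_on_demand=32 peer_credits=8 peer_credits_hiw=4
```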

Severity: normal
Description: Fixing 'running out of ports' issue
Details: Add a delay before the next reconnect attempt in ksocklnd in the case of a lost race. Limit the frequency of query requests in LNET. Improve handling of 'dead peer' notifications in LNET.

Severity: normal
Description: Change ptllnd timeout and watchdog timers
Details: Add ptltrace_on_nal_failed and bump ptllnd timeout to match Portals wire timeout.

Severity: normal
Description: One down Lustre FS hangs ALL mounted Lustre filesystems
Details: Shared routing enhancements - peer health detection.

Severity: minor
Description: IB path MTU mistakenly set to 1st path MTU when ib_mtu is off
Details: See comment 46 in bug 11245 for details - it's indeed a bug introduced by the original 11245 fix.

Severity: minor
Description: uptllnd credit overflow fix
Details: kptl_msg_t::ptlm_credits could be overflowed by uptllnd since it is only a __u8.

Severity: major
Description: socklnd protocol version 3
Details: With protocol V2, connections on a router can be blocked and unable to receive any incoming messages when there are no more router buffers, so a ZC-ACK can't be handled (the LNet message can't be finalized), causing deadlock on the router. Protocol V3 has a dedicated connection for emergency messages such as ZC-ACK to the router; messages on this dedicated connection don't need any credits, so they will never be blocked. Also, V3 can send a keepalive ping at a specified period for router health checking.

Severity: minor
Frequency: in recovery
Description: don't mix llog inodes with normal inodes.
Details: allocate inodes for logs in the last inode group

Severity: normal
Description: Deadlock between filter_destroy() and filter_commitrw_write().
Details: filter_destroy() does not hold the DLM lock over the whole operation. If the DLM lock is dropped, filter_commitrw() can go through, causing the deadlock between page lock and i_mutex.

Severity: enhancement
Description: Update

Severity: normal
Frequency: with 1.8 server and 1.6 clients
Description: correctly shrink the reply to avoid sending too big a message to the client.
Details: the 1.8 MDS allocates too big a buffer for LOV EA data, which causes problems when sending this reply to a 1.6 client.

Severity: normal
Description: Repeated atomic allocation failures.
Details: Use GFP_HIGHUSER | __GFP_NOMEMALLOC flags for memory allocations to generate memory pressure and allow reclaiming of inactive pages, while not allowing emergency pools to be exhausted. For local clients the use of GFP_NOFS will be introduced in 1.8.2

Severity: enhancement
Description: Update kernel to RHEL5 2.6.18-128.1.14.el5.

Severity: enhancement
Description: Add support for SLES11 2.6.27.23-0.1.

Severity: enhancement
Description: Update client support to vanilla kernels up to 2.6.27.

Severity: enhancement
Description: Update kernel to SLES10 SP2 2.6.16.60-0.37.

Severity: enhancement
Description: Compile with -Werror by default for i686 and x86_64.

Severity: normal
Description: resolve race between obd_disconnect and class_disconnect_exports
Details: if obd_disconnect is called on an already disconnected export, it forgets to release one reference, and then the osc module can't be unloaded.

Severity: enhancement
Description: move AT tunable parameters for more consistent usage
Details: add AT tunables under /proc/sys/lustre, add to conf_param parsing
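A sketch of the two setting paths this change adds, using at_max (a real AT tunable) with an illustrative value and filesystem name:

```shell
# Transient, per-node setting through /proc/sys/lustre:
lctl set_param at_max=600

# Persistent setting through the configuration log on the MGS
# (conf_param parsing added by this change; fsname illustrative):
lctl conf_param testfs.sys.at_max=600
```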

Severity: normal
Description: correctly skip time estimate if in recovery
Details: rq_send_state isn't a bitmask, so using bitwise ops on it is forbidden.

Severity: normal
Description: OSS DeadLock
Details: Use trylock to prevent deadlock when shrinking the icache.

Severity: enhancement
Description: Allow tuning service thread via /proc
Details: For each service a new /proc/fs/lustre/{service}/*/thread_{min,max,started} entry is created that can be used to set min/max thread counts, and get the current number of running threads.
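For example, for the OSS I/O service (the exact entry names are assumed from the pattern quoted above and may differ slightly; values illustrative):

```shell
# Read the current, minimum, and maximum thread counts for ost_io.
lctl get_param ost.OSS.ost_io.threads_started
lctl get_param ost.OSS.ost_io.threads_min
lctl get_param ost.OSS.ost_io.threads_max

# Allow the service to grow to at most 128 threads.
lctl set_param ost.OSS.ost_io.threads_max=128
```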

Severity: enhancement
Description: Add state history info file, enhance import info file
Details: Track import connection state changes in a new osc/mdc proc file; add overview-type data to the osc/mdc import proc file.

Severity: normal
Description: Reduce small size read RPC
Details: Set a read-ahead limit for every file and only do read-ahead when the available read-ahead pages exceed 1MB, to avoid small read RPCs.

Severity: normal
Description: free_entry erroneously used groups_free instead of put_group_info

Severity: enhancement
Description: Make read-ahead stripe size aligned.

Severity: enhancement
Description: MDS create should not wait for statfs RPC while holding DLM lock.

Severity: normal
Frequency: rare, connect and disconnect target at same time
Description: ASSERTION(atomic_read(&imp->imp_inflight) == 0)
Details: don't call obd_disconnect under lov_lock; it is a long operation and can block ptlrpcd, which answers connect requests.

Severity: normal
Frequency: starting the MDS on an uncleanly shut down MDS device
Description: the ll_sync thread stays waiting for mds<>ost recovery to finish
Details: waiting for mds<>ost recovery to finish produces random bugs due to a race between two ll_sync threads for one lov target. Send the ACTIVATE event only if the connect has really finished and the import is in the FULL state.

Severity: normal
Frequency: starting the MDS on an uncleanly shut down MDS device
Description: aborting recovery hang on MDS
Details: don't throttle destroy RPCs for the MDT.

Severity: low
Description: Slow reads beyond 8TB offsets.
Details: Page index integer overflow in ll_read_ahead_page

Severity: normal
Description: MSG_CONNECT_INITIAL is not set on the initial MDS->OST connect.
Details: MSG_CONNECT_INITIAL is not set on the initial MDS->OST connect. As a consequence, the patch from bug 18224 is not operational and the MDS export cannot be reused on the OSTs until it gets evicted.

Severity: major
Frequency: rare, only if using MMP with Linux RAID
Description: MMP doesn't work with Linux RAID
Details: While using HA for Lustre servers with Linux RAID, it is possible that MMP will not detect multiple mounts. To make this work we need to unplug the device queue in RAID when the MMP block is being written. Also while reading the MMP block, we should read it from disk and not the cached one.

Severity: minor
Frequency: rare, during recovery
Description: Assertion failure in ldlm_lock_put
Details: Do not put cancelled locks into replay list, hold references on locks in replay list

Severity: normal
Description: 1.6.5 mdsrate performance is slower than 1.4.11/12 (MDS is not cpu bound!)
Details: create_count always drops to the min value (=32) because grow_count is being changed before the precreate RPC completes.

Severity: normal
Frequency: Only in RHEL5 when mounting multiple ext3 filesystems simultaneously
Description: "kmem_cache_create: duplicate cache jbd_4k" error message
Details: add proper locking for creation of jbd_4k slab cache

Severity: normal
Description: MMP check in ext3_remount() fails without displaying any error
Details: When multiple mount protection fails during remount, a proper error should be returned

Severity: low
Description: Rare client crash on resend if the file was deleted.
Details: When a file is opened but the open reply is lost, and the file is subsequently deleted before the resend, the resend processing logic breaks by trying to open the file again; it should not try to open it.

Severity: high
Description: add check for >8TB ldiskfs filesystems
Details: ext3-based ldiskfs does not support LUNs greater than 8TB. Don't allow >8TB ldiskfs filesystems to be mounted without the force_over_8tb mount option
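A minimal sketch of overriding the new check (device and mount point hypothetical); this should only be done knowingly, since ext3-based ldiskfs is unsupported beyond 8TB:

```shell
# Explicitly acknowledge a >8TB LUN when mounting an ldiskfs target.
mount -t lustre -o force_over_8tb /dev/sdd /mnt/ost0
```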

Severity: normal
Description: Client locked up when running multiple instances of an app. on multiple mount points
Details: ll_shrink_cache() can sleep while holding the ll_sb_lock. Convert ll_sb_lock to a read/write semaphore to fix the problem.

Severity: normal
Description: Cannot access an NFS-mounted Lustre filesystem
Details: An NFS client cannot access a Lustre filesystem that is NFS-exported by a Lustre client.

Severity: normal
Description: panic in ll_statahead_thread
Details: grab dentry reference in parent process.

Changes from v1.8.0 to v1.8.0.1

Support for networks:

  • socklnd - any kernel supported by Lustre
  • qswlnd - Qsnet kernel modules 5.20 and later
  • openiblnd - IbGold 1.8.2
  • o2iblnd - OFED 1.1, 1.2.0, 1.2.5, 1.3 and 1.4.1
  • viblnd - Voltaire ibhost 3.4.5 and later
  • ciblnd - Topspin 3.2.0
  • iiblnd - Infiniserv 3.3 + PathBits patch
  • gmlnd - GM 2.1.22 and later
  • mxlnd - MX 1.2.1 or later
  • ptllnd - Portals 3.3 / UNICOS/lc 1.5.x, 2.0.x

Support for kernels:

  • 2.6.16.60-0.37 (SLES 10)
  • 2.6.18-128.1.6.el5 (RHEL 5)
  • 2.6.22.14 vanilla (kernel.org)

Client support for unpatched kernels: (see Patchless Client)

  • 2.6.16 - 2.6.22 vanilla (kernel.org)

Recommended e2fsprogs version: 1.40.11-sun1

File join has been disabled in this release, refer to bugzilla 16929

A new Lustre ADIO driver is available for MPICH2-1.0.7.

NFS export is disabled when the stack size is < 8192, since NFSv4 export of a Lustre filesystem with a 4K stack may cause a stack overflow. For more information, refer to bugzilla 17630.

Severity: major
Description: Handle new CM events in OFED 1.4

Severity: enhancement
Description: Update OFED release to 1.4.1 RC4

Severity: enhancement
Description: Update kernel to SLES10 SP2 2.6.16.60-0.37.

Severity: enhancement
Description: Update to RHEL5.3 kernel-2.6.18-128.1.6.el5.

Severity: enhancement
Description: Add support for OFED 1.4.1.

Severity: enhancement
Description: build OFED 1.4.1 with mlx4_en (Mellanox ConnectX drivers in 10GbE mode) enabled

Severity: major (SLES10/OFED 1.4.1 only)
Description: BUG: soft lockup - CPU#7 stuck for 10s! [ll_imp_inval:18451]
Details: ll_imp_inval can sleep on waiting for a semaphore while holding a spinlock. Convert lco_lock to a semaphore to address the problem.

Severity: major, only with large OSTs
Description: Very poor metadata performance on InfiniBand Lustre configurations
Details: OST object precreation becomes very slow on large OSTs because the ialloc patch spends too much time scanning groups.

Severity: normal
Frequency: during recovery
Description: don't mix llog inodes with normal inodes.
Details: Allocate inodes for logs in the last inode group.

Severity: major
Frequency: rare
Description: fix lqs' reference which won't be put in some situations
Details: This patch fixes:

    1. quota_check_common() checks quota for both user and group, but
       returns only a single result via "pending". In most cases the two
       results are the same, but that is not always the case.
    2. The same problem occurs if quotaoff runs between lquota_chkquota()
       and lquota_pending_commit(). That is why this change was made:
       -        if (!ll_sb_any_quota_active(qctxt->lqc_sb))
       -                RETURN(0);

Severity: enhancement
Description: improve lctl set/get_param
Details: Handle bad options, support more than one argument, and add a '-F' option to append a type indicator to the parameters.
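A hedged sketch of the enhanced interface (the parameter names below are illustrative):

```shell
# '-F' appends an indicator to each parameter name, similar to 'ls -F':
lctl get_param -F 'osc.*'
# More than one argument can now be passed in a single invocation:
lctl set_param timeout=40 debug=0
```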

Changes from v1.6.7.1 to v1.8.0

Support for networks:

  • socklnd - any kernel supported by Lustre
  • qswlnd - Qsnet kernel modules 5.20 and later
  • openiblnd - IbGold 1.8.2
  • o2iblnd - OFED 1.1, 1.2.0, 1.2.5, and 1.3.1
  • viblnd - Voltaire ibhost 3.4.5 and later
  • ciblnd - Topspin 3.2.0
  • iiblnd - Infiniserv 3.3 + PathBits patch
  • gmlnd - GM 2.1.22 and later
  • mxlnd - MX 1.2.1 or later
  • ptllnd - Portals 3.3 / UNICOS/lc 1.5.x, 2.0.x

Support for kernels:

  • 2.6.16.60-0.31 (SLES 10)
  • 2.6.18-92.1.17.el5 (RHEL 5)
  • 2.6.22.14 vanilla (kernel.org)

Client support for unpatched kernels: (see Patchless Client)

  • 2.6.16 - 2.6.22 vanilla (kernel.org)

Recommended e2fsprogs version: 1.40.11-sun1

File join has been disabled in this release; refer to bugzilla 16929.

A new Lustre ADIO driver is available for MPICH2-1.0.7.

NFS export is disabled when the stack size is < 8192, since NFSv4 export of a Lustre filesystem with a 4K stack may cause a stack overflow. For more information, refer to bugzilla 17630.

Severity: minor
Description: minor fixes and cleanups
Details: use EXT_UNSET_BLOCK to avoid confusion with EXT_MAX_BLOCK. Initialize 'ix' variable in extents patch to stop compiler warning.

Severity: feature
Description: update FIEMAP ioctl to match upstream kernel version
Details: the FIEMAP block-mapping ioctl had a prototype version in ldiskfs 3.0.7 but this release updates it to match the interface in the upstream kernel, with a new ioctl number.

Severity: normal
Frequency: only if MMP is active and detects filesystem is in use
Description: if MMP startup fails, an oops is triggered
Details: if ldiskfs mounting doesn't succeed the error handling doesn't clean up the MMP data correctly, causing an oops.

Severity: enhancement
Description: Caching OSS
Details: introduce data caching on the OSS. The OSS now relies on the Linux kernel page cache to keep recently accessed data in memory. Note that all write requests are still flushed synchronously, as in Lustre 1.6.

Severity: enhancement
Description: version based recovery
Details: introduce finer-grained recovery that detects transaction dependencies and can deal with transaction gaps caused by clients failing at the same time as the server.

Severity: enhancement
Description: Enable adaptive timeouts by default
Details: The Lustre timeout value in /proc/sys/lustre/timeout is now managed dynamically based on server load and should not need to be tuned manually based on cluster size. This allows Lustre to work under a wider variety of system sizes and loads, without unnecessarily causing lengthy recovery times.

Severity: enhancement
Description: Add OST Pools support
Details: File striping can now be set to use an arbitrary pool of OSTs

Severity: enhancement
Description: add lazystatfs mount option to allow statfs(2) to skip down OSTs
Details: Allow statfs requests to skip disconnected OSTs and hide the resulting errors.
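A minimal sketch of the new mount option (the MGS NID, filesystem name, and mount point are placeholders):

```shell
# With lazystatfs, statfs(2) callers such as df no longer block on
# disconnected OSTs; errors from those OSTs are hidden.
mount -t lustre -o lazystatfs 10.0.0.1@tcp:/lustre /mnt/lustre
```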

Severity: normal
Frequency: rare, on llog test 6
Description: don't allow connecting to an already-connected import
Details: Allowing a connect to an already-connected import hides connection problems.

Severity: normal
Frequency: rare, connect and disconnect target at same time
Description: ASSERTION(atomic_read(&imp->imp_inflight) == 0
Details: Don't call obd_disconnect under lov_lock; it is a long-running operation and can block ptlrpcd, which answers connect requests.

Severity: normal
Frequency: rare, on failed llog setup
Description: don't leak obd reference on failed llog setup
Details: On a failed llog setup, the MGC forgot to call class_destroy_import for the client import; move the import destruction to a more generic place.

Severity: normal
Frequency: rare
Description: allow killing a process that is waiting for a statahead result
Details: 'ls' can get stuck waiting for a statahead result; in this case there must be a way to kill the process.

Severity: normal
Frequency: rare
Description: don't lose wakeup for imp_recovery_waitq
Details: recover_import_no_retry or invalidate_import and import_close can all sleep on imp_recovery_waitq, but only one wakeup was sent to the sleep queue.

Severity: normal
Frequency: rare, at shutdown
Description: panic at umount
Details: llap_shrinker can race with removal of the super block from the list, producing a panic from access to an already-freed pointer.

Severity: normal
Frequency: rare
Description: panic in mds_open
Details: don't confuse mds_finish_transno() with ERR_PTR(-ENOENT)

Severity: normal
Frequency: rare
Description: stuck in cache_remove_extent() or panic from accessing an already-freed lock.
Details: Release the lock reference only after adding the page to the pages list.

Severity: normal
Frequency: starting the MDS on an uncleanly shut down MDS device
Description: ll_sync thread stays waiting for MDS<>OST recovery to finish
Details: Waiting for MDS<>OST recovery to finish produces random bugs due to a race between two ll_sync threads for one LOV target. Send the ACTIVATE event only if the connect has really finished and the import is in the FULL state.

Severity: normal
Frequency: always with long access ACLs
Description: MDS can't pack a reply with a long ACL.
Details: The MDS does not control the size of ACLs, but they are limited by the reint/getattr reply buffer.

Severity: normal
Frequency: starting the MDS on an uncleanly shut down MDS device
Description: aborting recovery hang on MDS
Details: don't throttle destroy RPCs for the MDT.

Severity: major
Frequency: on remount
Description: external journal device not working after the remount
Details: clear dev_rdonly flag for external journal devices in blkdev_put()

Severity: minor
Frequency: rare
Description: shutdown vs evict race
Details: Race between client_disconnect_export and a connect request. If the client is evicted at this time, the invalidate thread starts without a reference to the import, and the import can be freed concurrently.

Severity: minor
Frequency: always
Description: shrink LOV EAs before replying
Details: correctly adjust LOV EA buffer for reply.

Severity: normal
Frequency: rare
Description: don't skip an OST target if it is assigned to the file
Details: Drop slow OSCs if we can, but not the requested start index. That is, if an OSC is slow and it is not the requested start OST, it can be skipped; otherwise skip it only if it is inactive, recovering, or out of space.

Severity: enhancement
Description: Update to RHEL5 kernel-2.6.18-92.1.17.el5.

Severity: enhancement
Description: Update to SLES10 SP2 kernel-2.6.16.60-0.31.

Severity: normal
Frequency: rare, needs ACLs on the inode.
Description: client can't handle OST addition correctly
Details: If an OST is added after the client connects to the MDS, the client can hit lnet_try_match_md ... too-big messages for widely striped files. The client must be taught to handle config events about adding a LOV target and to update its maximum EA size on that event.

Severity: normal
Frequency: creating a symlink with a very long name
Description: ldlm_cancel_pack()) ASSERTION(max >= dlm->lock_count + count)
Details: If there is no extra space in the request for early cancels, ldlm_req_handles_avail() returns 0 instead of a negative value.

Severity: major
Frequency: rare
Description: mds is deadlocked
Details: In rare cases, an inode in the catalog can have a lower inode number than its parent. This produces the wrong locking order during open, and a parallel unlink can block the open. Teach mds_open to grab locks in resource ID order, not parent -> child order.

Severity: enhancement
Description: Add /proc entry for import status
Details: The mdc, osc, and mgc /proc directories now have an import entry that contains useful import data for debugging connection problems.

Severity: enhancement
Description: Re-disable certain /proc logging
Details: Enable and disable client's offset_stats, extents_stats and extents_stats_per_process stats logging on the fly.

Severity: major
Frequency: Only on FC kernels 2.6.22+
Description: oops in statahead
Details: Do not drop the dentry's reference count from VFS during lookup; VFS will do that by itself.

Severity: enhancement
Description: Generic /proc file permissions
Details: Set /proc file permissions in a more generic way to enable non-root users to operate on some /proc files.

Severity: major
Description: Hitting mdc_commit_close() ASSERTION
Details: Properly handle request reference release in ll_release_openhandle().

Severity: normal
Description: patchless client only
Details: Add a workaround for a race between adding a dentry to and removing it from the hash.

Severity: enhancement
Description: Allow OST glimpses to return PW locks

Severity: minor
Description: LBUG when llog conf file is full
Details: When the llog bitmap is full, ENOSPC should be returned for a plain log.

Severity: normal
Description: Prevent import from entering FULL state when server in recovery

Severity: major
Description: service mount cannot take device name with ":"
Details: Only when device name contains ":/" will mount treat it as client mount.

Severity: normal
Frequency: rare
Description: replace ptlrpcd with the statahead thread to interpret the async statahead RPC callback

Severity: normal
Frequency: on recovery
Description: I/O failures after umount during fail back
Details: If a client reconnects to a restarted server, it must join recovery instead of noticing that the server handle has changed and evicting itself, cancelling all its locks.

Severity: normal
Description: Kernel BUG tries to release flock
Details: Lustre does not destroy a flock lock before the last reference goes away, so always drop flock locks when the client is evicted and perform the unlock regardless of whether the MDS could be reached.

Severity: enhancement
Description: Upcall on Lustre log has been dumped
Details: Allow a user-mode script to be called once a Lustre log has been dumped. The filename of the dumped log is passed to the script; the location of the script can be specified via /proc/sys/lnet/debug_log_upcall.
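A sketch of registering such an upcall (the script path is hypothetical):

```shell
# The registered script is invoked with the dumped log's filename
# as its first argument.
echo /usr/local/sbin/lustre_log_upcall.sh > /proc/sys/lnet/debug_log_upcall
```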

Severity: minor
Frequency: rare
Description: avoid messages about idr_remove called for id that is not allocated
Details: Move the s_dev assignment for clustered NFS to the end of initialization to avoid problems with error handling.

Severity: minor
Frequency: rare
Description: avoid "Already found the key in hash [CONN_UNUSED_HASH]" messages
Details: When a connection was reused, it was not moved from CONN_UNUSED_HASH into CONN_USED_HASH, which produced a warning when the connection was put back into the unused hash.

Severity: normal
Frequency: rare
Description: avoid ASSERTION(client_stat->nid_exp_ref_count == 0) failed
Details: Release the reference to the stats when the client disconnects rather than when the export is destroyed, to avoid races when the client is destroyed after the main OST export.

Severity: normal
Description: more cleanup in mds_lov
Details: Add a workaround to obtain a valid OST count, avoiding warnings about dropped too-big messages; don't initialize the llog catalog under a semaphore, which can block on reconnect and break normal replay; fix access to a wrong pointer.

Severity: enhancement
Description: Export bytes_read/bytes_write count on OSC/OST.

Severity: normal
Description: Early reply size mismatch, MGC loses connection
Details: Apply the MGS_CONNECT_SUPPORTED mask at reconnect time so the connect flags are properly negotiated.

Severity: normal
Description: Properly propagate oinfo flags from lov to osc for statfs
Details: Restore the missing copy of oi_flags into LOV requests.

Severity: normal
Description: exports in /proc are broken
Details: recreate /proc entries for clients when they reconnect.

Severity: enhancement
Description: Add man pages for llobdstat(8), llstat(8), plot-llstat(8), l_getgroups(8), lst(8), routerstat(8)
Details: included man pages for llobdstat(8), llstat(8), plot-llstat(8), l_getgroups(8), lst(8), routerstat(8)

Severity: enhancement
Description: Implement lustre ll_show_options method.

Severity: normal
Description: don't fail open with -ERANGE
Details: If a client connects before the MDS knows the real OST count, getting the LOV EA can fail because the MDS did not allocate a large enough buffer for it.

Severity: normal
Description: Resolve device initialization race
Details: Prevent the proc handler from accessing devices that have been added to the obd_devs array but not yet initialized.

Severity: enhancement
Description: configure's --enable-quota should check the kernel .config for CONFIG_QUOTA
Details: configure now aborts if --enable-quota is passed but the kernel has no quota support.

Severity: normal
Frequency: rare, on PPC clients
Description: don't swab OST objects in a reply about a directory, because they don't exist.
Details: Similar to bug 14856, but in a different function.

Severity: enhancement
Description: lfs quota tool enhancement
Details: added unit-specifier support for setquota, defaulting to the current uid/gid for quota reports, short quota stats by default, non-positional parameters for setquota, and an llapi_quotactl manual page.
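A hedged sketch of the enhanced interface (user name, limits, and mount point are illustrative):

```shell
# Non-positional parameters with unit suffixes for the block limits:
lfs setquota -u bob -b 10G -B 11G -i 100000 -I 110000 /mnt/lustre
# The report now defaults to the current uid/gid when none is given:
lfs quota /mnt/lustre
```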

Severity: enhancement
Description: *optional* service tags registration
Details: If the "service tags" package is installed on a Lustre node, a local-node service tag will be created when the filesystem is mounted. See http://inventory.sun.com/ for more information about the Service Tags asset management system.

Severity: normal
Description: Client runs out of low memory
Details: Consider only lowmem when counting initial number of llap pages

Severity: normal
Frequency: occasional
Description: add a refcount for OSC callbacks to avoid a panic on shutdown

Severity: normal
Frequency: testing only
Description: sanity test 65a fails if stripecount of -1 is set
Details: handle -1 striping on filesystem in ll_dirstripe_verify

Severity: normal
Frequency: only in unusual configurations
Description: Kernel panic when finding the OST index.
Details: lov_obd panics if some OSTs have sparse indexes.

Severity: major
Frequency: rarely, if filesystem is mounted with -o flock
Description: do not process already freed flock
Details: The flock can possibly be freed by another thread before it reaches ldlm_flock_completion_ast.

Severity: normal
Frequency: rarely, if filesystem is mounted with -o flock
Description: LBUG during stress test
Details: Properly lock accesses to the flock deadlock-detection list.

Severity: minor
Frequency: rarely, if binaries are being run from Lustre
Description: oops in page fault handler
Details: The kernel page fault handler can return two special 'pages' in the error case; don't try to dereference NOPAGE_SIGBUS or NOPAGE_OOM.

Severity: minor
Frequency: rarely, during shutdown
Description: timeout with invalidate import.
Details: ptlrpcd_check calls obd_zombie_impexp_cull and waits for a request that should be handled by ptlrpcd itself. This produces a long wait, -ETIMEDOUT from ptlrpc_invalidate_import, and as a result an LASSERT.

Severity: normal
Frequency: rarely
Description: ASSERTION(CheckWriteback(page,cmd)) failed
Details: Incorrectly clearing the PG_writeback bit in ll_ap_completion can produce a false-positive assertion.

Severity: normal
Frequency: only with broken builds/installations
Description: no LBUG if lquota.ko and fsfilt_ldiskfs.ko are different versions
Details: Just return an error to the user and print a console error message.

Severity: enhancement
Description: enable MGS and MDT services start separately
Details: Add a 'nomgs' option to mount.lustre to enable starting an MDT with a co-located MGS without starting the MGS; it complements the 'nosvc' mount option.
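A minimal sketch, assuming a combined MGS/MDT device (device and mount point are placeholders):

```shell
# Start only the MDT service; the co-located MGS is left stopped.
mount -t lustre -o nomgs /dev/sda1 /mnt/mdt
```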

Severity: normal
Frequency: always, on big-endian systems
Description: cleanup in ptlrpc code, related to PPC platform
Details: Store the magic in native byte order to avoid panics during recovery on PPC nodes and to prevent this error in the future. Also fix the possibility of swabbing data twice, and fix passing LOV striping to userspace.

Severity: normal
Frequency: rarely, if replay get lost on server
Description: server incorrectly drops resent replays, leading to recovery failure.
Details: Do not drop a replay based on message flags; instead, check the per-export recovery request queue for a duplicate transno.

Severity: normal
Frequency: after recovery
Description: too many objects precreated after orphan deletion.
Details: After orphan deletion, the oscc has last_id == next_id, which triggers a growing count of precreated objects. Set the LOW flag to skip increasing the precreated-object count.

Severity: normal
Frequency: rare, on clear nid stats
Description: ASSERTION(client_stat->nid_exp_ref_count == 0)
Details: When cleaning NID stats, a live entry was sometimes destroyed, producing a panic on free.

Severity: major
Frequency: occasionally since 1.6.4
Description: Stack overflow during MDS log replay
Details: Ease stack pressure by using a dedicated thread for llog_process.

Severity: minor
Frequency: very rare
Description: MDT cannot be unmounted, reporting "Mount still busy"
Details: Mountpoint references were being leaked during open reply reconstruction after an MDS restart. Drop mountpoint reference in reconstruct_open() and free dentry reference also.

Severity: normal
Frequency: rare
Description: wait until I/O finishes before starting new I/O when cancelling a lock.
Details: The VM protocol wants old I/O finished before new I/O starts; wait until PG_writeback is cleared before checking the dirty flag and calling writepages in the lock cancel callback.

Severity: normal
Frequency: rare
Description: mds_mfd_close() ASSERTION(rc == 0)
Details: In mds_mfd_close(), protect the inode's writecount change with its orphan write semaphore to prevent possible races.

Severity: minor
Frequency: rare, on shutdown ost
Description: don't hit a livelock when unmounting the OST.
Details: shrink_dcache_parent can loop for a long time destroying dentries; use shrink_dcache_sb instead.

Severity: minor
Frequency: only when echo_client is used
Description: don't panic when using echo_client
Details: The echo client passes NULL as the client NID pointer, producing a NULL pointer dereference.

Severity: normal
Frequency: Always on 32-bit PowerPC systems
Description: fix build on PPC32
Details: Compiling with the -m64 flag produces wrong object files for PPC32.

Severity: normal
Frequency: rare
Description: MDS LBUG: ASSERTION(!IS_ERR(dchild))
Details: In the reconstruct_* functions, LASSERTs on both the data supplied by a client and the data on disk are dangerous and incorrect. Replace them with client eviction.

Severity: enhancement
Description: skiplist implementation simplification
Details: Skiplists are used to group compatible locks on the granted list. This was implemented by tracking the first and last lock of each lock group; the patch changes that to use doubly linked lists.

Severity: normal
Description: delete compatibility for 32bit qdata
Details: As planned, now that Lustre is beyond b1_8, lquota no longer supports 32-bit qunits. This means b1_4 servers and b1_8 servers cannot be used together if quota is wanted.

Severity: normal
Frequency: only with administrator action
Description: mount failure if config log has invalid conf_param setting
Details: If the administrator specified an incorrect configuration parameter with "lctl conf_param", it would cause an error during future client mounts. Instead, ignore the bad configuration parameter.

Severity: normal
Frequency: blocks per group < blocksize*8 and uninit_groups is enabled
Description: ldiskfs error: XXX blocks in bitmap, YYY in gd
Details: If blocks per group is less than blocksize*8, set rest of the bitmap to 1.

Severity: major
Frequency: applications doing strided reads on Lustre
Description: Read performance drops significantly when an application does strided reads.
Details: Because stride_start_offset was missing in stride read-ahead, clients read many unused pages during read-ahead, and read performance drops.

Severity: normal
Description: more ldlm soft lockups
Details: In ldlm_resource_add_lock(), the call to ldlm_resource_dump() starves other threads of the resource lock for a long time when the waiting queue is long, so change the debug level from D_OTHER to the less frequently used D_INFO.

Severity: enhancement
Description: add -gid, -group, -uid, -user options to lfs find
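A brief sketch of the new options (user name, gid, and mount point are illustrative):

```shell
# Find files owned by a given user or group under a Lustre mount:
lfs find /mnt/lustre -user bob
lfs find /mnt/lustre -gid 1000
```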

Severity: enhancement
Description: ll_recover_lost_found_objs - recover objects in lost+found
Details: OST corruption and subsequent e2fsck can leave objects in the lost+found directory. Using the "ll_recover_lost_found_objs" tool, these objects can be retrieved and data can be salvaged by using the object ID saved in the fid EA on each object.
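A hedged sketch of invoking the tool (the mount point is a placeholder, and the flag reflects the tool's usual usage):

```shell
# Scan lost+found on a locally mounted ldiskfs OST partition and
# restore objects using the object ID stored in each file's fid EA.
ll_recover_lost_found_objs -d /mnt/ost/lost+found
```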

Severity: minor
Frequency: rare
Description: this bug happens _only_ when the inode quota limit is very low (less than 12), so that the inode quota unit is 1 at initialization.
Details: If the remaining quota equals 1, it is a sign that quota is now in effect, so the smallest quota qunit should be 2.

Severity: normal
Description: Hung threads in invalidate_inode_pages2_range
Details: The direct IO path doesn't call check_rpcs to submit a new RPC once one is completed. As a result, some RPCs are stuck in the queue and are never sent.

Severity: normal
Description: Procfs and llog threads sometimes access a destroyed import.
Details: Synchronize import destruction with the procfs and llog threads via the import refcount and a semaphore.

Severity: major
Description: mds fails to respond, threads stuck in ldlm_completion_ast
Details: Sort source/child resource pair after updating child resource.

Severity: major
Frequency: rare
Description: kernel BUG at ldiskfs2_ext_new_extent_cb
Details: If insertion of an extent fails, discard the inode preallocation and free the data blocks; otherwise it can lead to duplicate blocks.

Severity: normal
Description: don't always update ctime in ext3_xattr_set_handle()
Details: The current xattr code updates the inode ctime in ext3_xattr_set_handle(). In some cases the ctime should not be updated; for example, for 2.0->1.8 compatibility it is necessary to delete an xattr without updating the ctime.

Severity: normal
Description: add quota statistics
Details: 1. sort out quota proc entries and proc code. 2. add quota statistics

Severity: normal
Frequency: often
Description: quotas are not honored with O_DIRECT
Details: All writes with the O_DIRECT flag used grants, which led to this problem. Now OBD_BRW_SYNC is used to guard this.

Severity: major
Frequency: rare
Description: Assertion in iopen_connect_dentry in 1.6.3
Details: Looking up an inode via iopen with the wrong generation number can populate the dcache with a disconnected dentry while the inode number is in the process of being reallocated. This causes an assertion failure in iopen, since the inode's dentry list contains both a connected and a disconnected dentry.

Severity: normal
Description: assertion failure in ldlm_handle2lock()
Details: fix a race between class_handle_unhash() and class_handle2object() introduced in lustre 1.6.5 by bug 13622.

Severity: enhancement
Description: superblock lock contention with many SMP cores on one client
Details: Several client filesystem locks were highly contended on SMP NUMA systems with 8 or more cores. Per-CPU data structures and more efficient locking were implemented to reduce contention.

Severity: minor
Frequency: rare
Description: Kernel BUG: sd_iostats_bump: unexpected disk index
Details: remove the limit of 256 scsi disks in the sd_iostat patch

Severity: minor
Frequency: rare
Description: oops in sd_iostats_seq_show()
Details: unloading/reloading the scsi low level driver triggers a kernel bug when trying to access the sd iostat file.

Severity: major
Frequency: rare
Description: Kernel panics during QLogic driver reload
Details: REQ_BLOCK_PC requests are not handled properly in the sd iostat patch, causing memory corruption.

Severity: minor
Frequency: rare
Description: journal_dev option does not work in b1_6
Details: pass mount option during pre-mount.

Severity: enhancement
Description: Add a FIEMAP (FIle Extent MAP) ioctl for ldiskfs
Details: The FIEMAP ioctl allows an application to efficiently fetch the extent information of a file. It can be used to map logical blocks in a file to physical blocks on the block device.

Severity: normal
Frequency: only with adaptive timeout enabled
Description: DEBUG_REQ() bad paging request
Details: ptlrpc_at_recv_early_reply() should not modify req->rq_repmsg because it can be accessed by reply_in_callback() without the rq_lock held.

Severity: normal
Frequency: only on Cray X2
Description: X2 build failures
Details: fix build failures on Cray X2.

Severity: normal
Description: xid & resent requests
Details: Initialize RPC XID from clock at startup (randomly if clock is bad).

Severity: major
Description: quota recovery deadlock during mds failover
Details: This patch includes att18982, att18236, and att18237 in bz14840. It solves two problems: 1. OSTs hang when the MDS fails over with quota on; 2. watchdog storms when OST threads wait for MDS recovery.

Severity: normal
Description: kernel panic on racer
Details: Do not access dchild->d_inode when IS_ERR(dchild) is true.

Severity: enhancement
Description: Add lustre_start utility to start or stop multiple Lustre servers from a CSV file.

Severity: major
Description: Lustre GPF in {:ptlrpc:ptlrpc_server_free_request+373}
Details: In case of memory pressure, list_del() can be called twice on req->rq_history_list, causing a kernel oops.

Severity: normal
Description: kptllnd_peer_check_sends()) ASSERTION(!in_interrupt()) failed
Details: Fix a stack overflow in the distributed lock manager by deferring export eviction after a failed AST to the elt thread instead of handling it in the DLM interpret routine.

Severity: enhancement
Description: More exported tunables for mballoc
Details: Add support for tunable preallocation window and new tunables for large/small requests

Severity: normal
Description: Detect corruption of block bitmap and checking for preallocations
Details: Checks the validity of the on-disk block bitmap and does better checking of the number of applied preallocations. When corruption is found, the filesystem is turned read-only to prevent further corruption.

Severity: normal
Frequency: only for big-endian servers
Description: Check if big-endian system while mounting fs with extents feature
Details: Mounting a filesystem with extents feature will fail on big-endian systems since ext3-based ldiskfs is not supported on big-endian systems. Can be overridden with "bigendian_extents" mount option.

Severity: normal
Description: Excessive recovery window
Details: With AT enabled, the recovery window can be excessively long (6000+ seconds). To address this problem, we no longer use OBD_RECOVERY_FACTOR when extending the recovery window (the connect timeout no longer depends on the service time, it is set to INITIAL_CONNECT_TIMEOUT now) and clients report the old service time via pb_service_time.

Severity: normal
Description: Watchdog triggered on MDS failover
Details: enable OBD_CONNECT_MDT flag when connecting from the MDS so that the OSTs know that the MDS "UUID" can be reused for the same export from a different NID, so we do not need to wait for the export to be evicted.

Severity: enhancement
Description: Don't sync journal after every i/o
Details: Implement write RPC replay to allow server replies for write RPCs before data is on disk. However, this feature is disabled by default since some issues leading to data corruptions have been found during recovery (e.g. bug 19128). This feature can be enabled by running the following command on the OSSs: lctl set_param obdfilter.*.sync_journal=0

Severity: low
Description: Slow reads beyond 8Tb offsets.
Details: Page index integer overflow in ll_read_ahead_page

Severity: major
Frequency: rare, only if using MMP with Linux RAID
Description: MMP doesn't work with Linux RAID
Details: While using HA for Lustre servers with Linux RAID, it is possible that MMP will not detect multiple mounts. To make this work, the device queue in RAID must be unplugged when the MMP block is written. Also, the MMP block should be read from disk, not from cache.

Severity: minor
Frequency: rare, during recovery
Description: Assertion failure in ldlm_lock_put
Details: Do not put cancelled locks into the replay list; hold references on locks in the replay list.

Severity: critical
Description: Lustre detected file system corruption with inode out of bounds
Details: don't update i_size on MDS_CLOSE for directories; doing so causes directory corruption on the MDT.

Severity: normal
Description: client doesn't try to reconnect
Details: Correctly skip the time estimate when in recovery.
