mirror of https://codeberg.org/ziglang/zig.git, synced 2026-03-08 02:24:33 +01:00
-- On the standard library side:

The `input: []const u8` parameter of functions passed to `testing.fuzz` has changed to `smith: *testing.Smith`. `Smith` is used to generate values, either directly from libfuzzer or from input bytes generated by libfuzzer. `Smith` contains the following base methods:

* `value`, a generic method for generating any type.
* `eos`, for generating end-of-stream markers. Provides the additional guarantee that `true` will eventually be provided.
* `bytes`, for filling a byte array.
* `slice`, for filling part of a buffer and providing the length.

`Smith.Weight` is used to give value ranges a higher probability of being selected. By default, every value has a weight of zero (i.e. it will not be selected). Weights can only apply to values that fit within a `u64`. The functions above have corresponding variants that accept weights. Additionally, the following functions are provided:

* `baselineWeights`, which provides a set of weights containing every possible value of a type.
* `eosSimpleWeighted`, for distinct weights for `true` and `false`.
* `valueRangeAtMost` and `valueRangeLessThan`, for weighting only a range of values.

-- On the libfuzzer and ABI side:

--- Uids

These are `u32`s used to classify requested values. They solve the problem of a mutation causing a new value to be requested and shifting all subsequent values. For example:

1. An initial input contains the values 1, 2, 3, which are interpreted as a, b, and c respectively by the test.
2. The 1 is mutated to a 4, which causes the test to request an extra value, interpreted as d. The input is now 4, 2, 3, 5 (new value), which the test maps to a, d, b, c; b and c no longer correspond to their original values.

Uids contain a hash component and a type component. The hash component is currently determined in `Smith` by taking a hash of the calling `@returnAddress()`, or via an argument in the corresponding `WithHash` functions. The type component is used extensively in libfuzzer's hashmaps.
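The shifting problem that uids solve can be illustrated with a small Python sketch. This is a conceptual model, not the actual libfuzzer implementation; `FlatStream`, `UidStream`, and `run_test` are invented names. Keying stored values by a stable per-call-site uid means a newly requested value cannot displace the values that other call sites receive:

```python
from collections import defaultdict

class FlatStream:
    """Values come from one flat list; every request consumes the next one."""
    def __init__(self, values):
        self.values = list(values)
        self.pos = 0

    def request(self, uid):
        v = self.values[self.pos]  # uid is ignored
        self.pos += 1
        return v

class UidStream:
    """Values are keyed by uid; each call site consumes only its own list."""
    def __init__(self, per_uid):
        self.per_uid = {uid: list(vs) for uid, vs in per_uid.items()}
        self.cursor = defaultdict(int)

    def request(self, uid):
        i = self.cursor[uid]
        self.cursor[uid] += 1
        return self.per_uid[uid][i]

def run_test(stream):
    # Mirrors the example: the value of `a` decides whether an extra value
    # `d` is requested before `b` and `c`.
    a = stream.request("a")
    d = stream.request("d") if a >= 4 else None
    b = stream.request("b")
    c = stream.request("c")
    return a, d, b, c

# Flat stream: mutating 1 -> 4 shifts b and c away from their old values.
print(run_test(FlatStream([4, 2, 3, 5])))  # (4, 2, 3, 5): b and c shifted
# Uid-keyed stream: b and c keep their original values 2 and 3.
print(run_test(UidStream({"a": [4], "b": [2], "c": [3], "d": [5]})))
```

With the flat stream, the mutated input reassigns b and c; with the uid-keyed stream they keep their original values, matching the example above.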
--- Mutations

At the start of a cycle (a run), a random number of values to mutate is selected, with smaller counts exponentially more likely. The indexes of the values are selected from a chosen uid, with a logarithmic bias toward uids holding more values. Mutations may change a single value, several consecutive values in a uid, or several consecutive values in the uid-independent order they were requested. They may generate random values, mutate from previous ones, or copy from other values in the same uid, either from the same input or spliced from another. For integers, mutating from a previous value currently only generates random values. For bytes, mutating from previous bytes mixes new random data with the previous bytes using a set number of mutations.

--- Passive Minimization

A different approach has been taken to minimizing inputs: instead of trying a fixed set of mutations when a fresh input is found, the input is simply added to the corpus and removed when it is no longer valuable. The quality of an input is measured by how many unique pcs it hit and how many values it needed from the fuzzer. For each pc, the inputs holding the best qualities are tracked: hitting the minimum and maximum number of unique pcs while needing the fewest values. Once all of an input's qualities have been superseded for the pcs it hit, it is removed from the corpus.

-- Comparison to a byte-based smith

A byte-based smith would be far less efficient and more complex than this solution. It would be unable to solve the shifting problem that uids do. It would be unable to provide values from the fuzzer past end-of-stream. Even with feedback, it would be unable to act on dynamic weights, which have proven essential in the updated tests (e.g. to constrain values to a range).

-- Test updates

All the standard library tests have been updated to use the new smith interface. For `Deque`, an ad hoc allocator was written to improve performance and remove reliance on heap allocation.
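The passive-minimization bookkeeping can be sketched in Python, simplified to a single quality per pc (fewest values needed). `Corpus` and its methods are illustrative inventions, not the actual implementation:

```python
class Corpus:
    """Passive minimization, reduced to one quality per pc: the input that
    covers the pc while needing the fewest values from the fuzzer."""
    def __init__(self):
        self.best = {}    # pc -> (values_needed, input_id)
        self.inputs = {}  # input_id -> set of pcs the input hit

    def add(self, input_id, pcs, values_needed):
        self.inputs[input_id] = set(pcs)
        for pc in pcs:
            if pc not in self.best or values_needed < self.best[pc][0]:
                self.best[pc] = (values_needed, input_id)
        self._prune()

    def _prune(self):
        # An input whose qualities have all been superseded is removed.
        holders = {input_id for _, input_id in self.best.values()}
        for input_id in list(self.inputs):
            if input_id not in holders:
                del self.inputs[input_id]

corpus = Corpus()
corpus.add(1, [10, 20], values_needed=5)  # kept: best for pcs 10 and 20
corpus.add(2, [10, 20], values_needed=3)  # supersedes input 1 everywhere
print(sorted(corpus.inputs))              # input 1 has been pruned
```

No fixed minimization pass runs; input 1 simply stops being valuable once input 2 covers the same pcs more cheaply, and is dropped.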
`TokenSmith` has been added to aid in testing `Ast` and to help inform decisions on the smith interface.
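The weighting scheme (a default weight of zero, with weights applied to value ranges) can be modeled with a short Python sketch. `value_weighted` and its `(lo, hi, weight)` tuples are invented for illustration and are not the `Smith` API:

```python
import random

def value_weighted(rng, ranges):
    """Pick a value given (lo, hi, weight) ranges. The default weight is
    zero, so values outside every range can never be selected."""
    total = sum(w * (hi - lo + 1) for lo, hi, w in ranges)
    pick = rng.randrange(total)
    for lo, hi, w in ranges:
        span = w * (hi - lo + 1)
        if pick < span:
            return lo + pick // w
        pick -= span
    raise AssertionError("unreachable")

rng = random.Random(0)
# A baseline covering 0..9 with weight 1, plus one heavily weighted value.
samples = [value_weighted(rng, [(0, 9, 1), (100, 100, 5)]) for _ in range(300)]
# Every sample falls inside a weighted range; nothing else is reachable.
print(set(samples) <= set(range(10)) | {100})  # True
```

This captures why dynamic weights matter for the updated tests: a test can hand the fuzzer a constrained range and still bias toward specific interesting values.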
717 lines · 26 KiB · Zig
const std = @import("std");
const assert = std.debug.assert;
const Allocator = std.mem.Allocator;

/// A contiguous, growable, double-ended queue.
///
/// Pushing/popping items from either end of the queue is O(1).
pub fn Deque(comptime T: type) type {
    return struct {
        const Self = @This();

        /// A ring buffer.
        buffer: []T,
        /// The index in buffer where the first item in the logical deque is stored.
        head: usize,
        /// The number of items stored in the logical deque.
        len: usize,

        /// A Deque containing no elements.
        pub const empty: Self = .{
            .buffer = &.{},
            .head = 0,
            .len = 0,
        };

        /// Initialize with capacity to hold `capacity` elements.
        /// The resulting capacity will equal `capacity` exactly.
        /// Deinitialize with `deinit`.
        pub fn initCapacity(gpa: Allocator, capacity: usize) Allocator.Error!Self {
            var deque: Self = .empty;
            try deque.ensureTotalCapacityPrecise(gpa, capacity);
            return deque;
        }

        /// Initialize with externally-managed memory. The buffer determines the
        /// capacity and the deque is initially empty.
        ///
        /// When initialized this way, all functions that accept an Allocator
        /// argument cause illegal behavior.
        pub fn initBuffer(buffer: []T) Self {
            return .{
                .buffer = buffer,
                .head = 0,
                .len = 0,
            };
        }

        /// Release all allocated memory.
        pub fn deinit(deque: *Self, gpa: Allocator) void {
            gpa.free(deque.buffer);
            deque.* = undefined;
        }

        /// Modify the deque so that it can hold at least `new_capacity` items.
        /// Implements super-linear growth to achieve amortized O(1) push/pop operations.
        /// Invalidates element pointers if additional memory is needed.
        pub fn ensureTotalCapacity(deque: *Self, gpa: Allocator, new_capacity: usize) Allocator.Error!void {
            if (deque.buffer.len >= new_capacity) return;
            return deque.ensureTotalCapacityPrecise(gpa, std.ArrayList(T).growCapacity(new_capacity));
        }

        /// If the current capacity is less than `new_capacity`, this function will
        /// modify the deque so that it can hold exactly `new_capacity` items.
        /// Invalidates element pointers if additional memory is needed.
        pub fn ensureTotalCapacityPrecise(deque: *Self, gpa: Allocator, new_capacity: usize) Allocator.Error!void {
            if (deque.buffer.len >= new_capacity) return;
            const old_buffer = deque.buffer;
            if (gpa.remap(old_buffer, new_capacity)) |new_buffer| {
                // If the items wrap around the end of the buffer we need to do
                // a memcpy to prevent a gap after resizing the buffer.
                if (deque.head > old_buffer.len - deque.len) {
                    // The gap splits the items in the deque into head and tail parts.
                    // Choose the shorter part to copy.
                    const head = new_buffer[deque.head..old_buffer.len];
                    const tail = new_buffer[0 .. deque.len - head.len];
                    if (head.len > tail.len and new_buffer.len - old_buffer.len > tail.len) {
                        @memcpy(new_buffer[old_buffer.len..][0..tail.len], tail);
                    } else {
                        // In this case overlap is possible if e.g. the capacity increase is 1
                        // and head.len is greater than 1.
                        deque.head = new_buffer.len - head.len;
                        @memmove(new_buffer[deque.head..][0..head.len], head);
                    }
                }
                deque.buffer = new_buffer;
            } else {
                const new_buffer = try gpa.alloc(T, new_capacity);
                if (deque.head < old_buffer.len - deque.len) {
                    @memcpy(new_buffer[0..deque.len], old_buffer[deque.head..][0..deque.len]);
                } else {
                    const head = old_buffer[deque.head..];
                    const tail = old_buffer[0 .. deque.len - head.len];
                    @memcpy(new_buffer[0..head.len], head);
                    @memcpy(new_buffer[head.len..][0..tail.len], tail);
                }
                deque.head = 0;
                deque.buffer = new_buffer;
                gpa.free(old_buffer);
            }
        }

        /// Modify the deque so that it can hold at least `additional_count` **more** items.
        /// Invalidates element pointers if additional memory is needed.
        pub fn ensureUnusedCapacity(
            deque: *Self,
            gpa: Allocator,
            additional_count: usize,
        ) Allocator.Error!void {
            return deque.ensureTotalCapacity(gpa, try addOrOom(deque.len, additional_count));
        }

        /// Add one item to the front of the deque.
        ///
        /// Invalidates element pointers if additional memory is needed.
        pub fn pushFront(deque: *Self, gpa: Allocator, item: T) error{OutOfMemory}!void {
            try deque.ensureUnusedCapacity(gpa, 1);
            deque.pushFrontAssumeCapacity(item);
        }

        /// Add one item to the front of the deque.
        ///
        /// Never invalidates element pointers.
        ///
        /// If the deque lacks unused capacity for the additional item, returns
        /// `error.OutOfMemory`.
        pub fn pushFrontBounded(deque: *Self, item: T) error{OutOfMemory}!void {
            if (deque.buffer.len - deque.len == 0) return error.OutOfMemory;
            return deque.pushFrontAssumeCapacity(item);
        }

        /// Add one item to the front of the deque.
        ///
        /// Never invalidates element pointers.
        ///
        /// Asserts that the deque can hold one additional item.
        pub fn pushFrontAssumeCapacity(deque: *Self, item: T) void {
            assert(deque.len < deque.buffer.len);
            if (deque.head == 0) {
                deque.head = deque.buffer.len;
            }
            deque.head -= 1;
            deque.buffer[deque.head] = item;
            deque.len += 1;
        }

        /// Add one item to the back of the deque.
        ///
        /// Invalidates element pointers if additional memory is needed.
        pub fn pushBack(deque: *Self, gpa: Allocator, item: T) error{OutOfMemory}!void {
            try deque.ensureUnusedCapacity(gpa, 1);
            deque.pushBackAssumeCapacity(item);
        }

        /// Add one item to the back of the deque.
        ///
        /// Never invalidates element pointers.
        ///
        /// If the deque lacks unused capacity for the additional item, returns
        /// `error.OutOfMemory`.
        pub fn pushBackBounded(deque: *Self, item: T) error{OutOfMemory}!void {
            if (deque.buffer.len - deque.len == 0) return error.OutOfMemory;
            deque.pushBackAssumeCapacity(item);
        }

        /// Add one item to the back of the deque.
        ///
        /// Never invalidates element pointers.
        ///
        /// Asserts that the deque can hold one additional item.
        pub fn pushBackAssumeCapacity(deque: *Self, item: T) void {
            assert(deque.len < deque.buffer.len);
            const buffer_index = deque.bufferIndex(deque.len);
            deque.buffer[buffer_index] = item;
            deque.len += 1;
        }

        /// Add `items` to the front of the deque.
        /// This is equivalent to iterating `items` in reverse and calling
        /// `pushFront` on every single entry.
        ///
        /// Invalidates element pointers if additional memory is needed.
        pub fn pushFrontSlice(deque: *Self, gpa: Allocator, items: []const T) error{OutOfMemory}!void {
            try deque.ensureUnusedCapacity(gpa, items.len);
            return deque.pushFrontSliceAssumeCapacity(items);
        }

        /// Add `items` to the front of the deque.
        /// This is equivalent to iterating `items` in reverse and calling
        /// `pushFront` on every single entry.
        ///
        /// Never invalidates element pointers.
        ///
        /// If the deque lacks unused capacity for the additional items, returns
        /// `error.OutOfMemory`.
        pub fn pushFrontSliceBounded(deque: *Self, items: []const T) error{OutOfMemory}!void {
            if (deque.buffer.len - deque.len < items.len) return error.OutOfMemory;
            return deque.pushFrontSliceAssumeCapacity(items);
        }

        /// Add `items` to the front of the deque.
        /// This is equivalent to iterating `items` in reverse and calling
        /// `pushFront` on every single entry.
        ///
        /// Never invalidates element pointers.
        ///
        /// Asserts that the deque can hold the additional items.
        pub fn pushFrontSliceAssumeCapacity(deque: *Self, items: []const T) void {
            assert(deque.buffer.len - deque.len >= items.len);
            if (deque.head < items.len) {
                @memcpy(deque.buffer[0..deque.head], items[items.len - deque.head ..]);
                deque.head = deque.buffer.len - items.len + deque.head;
                @memcpy(deque.buffer[deque.head..], items.ptr);
            } else {
                deque.head -= items.len;
                @memcpy(deque.buffer[deque.head..][0..items.len], items);
            }
            deque.len += items.len;
        }

        /// Add `items` to the back of the deque.
        /// This is equivalent to iterating `items` in order and calling
        /// `pushBack` on every single entry.
        ///
        /// Invalidates element pointers if additional memory is needed.
        pub fn pushBackSlice(deque: *Self, gpa: Allocator, items: []const T) error{OutOfMemory}!void {
            try deque.ensureUnusedCapacity(gpa, items.len);
            return deque.pushBackSliceAssumeCapacity(items);
        }

        /// Add `items` to the back of the deque.
        /// This is equivalent to iterating `items` in order and calling
        /// `pushBack` on every single entry.
        ///
        /// Never invalidates element pointers.
        ///
        /// If the deque lacks unused capacity for the additional items, returns
        /// `error.OutOfMemory`.
        pub fn pushBackSliceBounded(deque: *Self, items: []const T) error{OutOfMemory}!void {
            if (deque.buffer.len - deque.len < items.len) return error.OutOfMemory;
            return deque.pushBackSliceAssumeCapacity(items);
        }

        /// Add `items` to the back of the deque.
        /// This is equivalent to iterating `items` in order and calling
        /// `pushBack` on every single entry.
        ///
        /// Never invalidates element pointers.
        ///
        /// Asserts that the deque can hold the additional items.
        pub fn pushBackSliceAssumeCapacity(deque: *Self, items: []const T) void {
            assert(deque.buffer.len - deque.len >= items.len);
            const trailing_buffer = deque.buffer[deque.bufferIndex(deque.len)..];
            if (trailing_buffer.len < items.len) {
                @memcpy(trailing_buffer, items[0..trailing_buffer.len]);
                @memcpy(deque.buffer.ptr, items[trailing_buffer.len..]);
            } else {
                @memcpy(trailing_buffer[0..items.len], items);
            }
            deque.len += items.len;
        }

        /// Return the first item in the deque or null if empty.
        pub fn front(deque: *const Self) ?T {
            if (deque.len == 0) return null;
            return deque.buffer[deque.head];
        }

        /// Return pointer to the first item in the deque or null if empty.
        pub fn frontPtr(deque: *const Self) ?*T {
            if (deque.len == 0) return null;
            return &deque.buffer[deque.head];
        }

        /// Return the last item in the deque or null if empty.
        pub fn back(deque: *const Self) ?T {
            if (deque.len == 0) return null;
            return deque.buffer[deque.bufferIndex(deque.len - 1)];
        }

        /// Return pointer to the last item in the deque or null if empty.
        pub fn backPtr(deque: *const Self) ?*T {
            if (deque.len == 0) return null;
            return &deque.buffer[deque.bufferIndex(deque.len - 1)];
        }

        /// Return the item at the given index in the deque.
        ///
        /// The first item in the queue is at index 0.
        ///
        /// Asserts that the index is in-bounds.
        pub fn at(deque: *const Self, index: usize) T {
            assert(index < deque.len);
            return deque.buffer[deque.bufferIndex(index)];
        }

        /// Return pointer to the item at the given index in the deque.
        ///
        /// The first item in the queue is at index 0.
        ///
        /// Asserts that the index is in-bounds.
        pub fn atPtr(deque: *const Self, index: usize) *T {
            assert(index < deque.len);
            return &deque.buffer[deque.bufferIndex(index)];
        }

        /// Remove and return the first item in the deque or null if empty.
        pub fn popFront(deque: *Self) ?T {
            if (deque.len == 0) return null;
            const pop_index = deque.head;
            deque.head = deque.bufferIndex(1);
            deque.len -= 1;
            return deque.buffer[pop_index];
        }

        /// Remove and return the last item in the deque or null if empty.
        pub fn popBack(deque: *Self) ?T {
            if (deque.len == 0) return null;
            deque.len -= 1;
            return deque.buffer[deque.bufferIndex(deque.len)];
        }

        pub const Iterator = struct {
            deque: *const Self,
            index: usize,

            pub fn peek(it: Iterator) ?T {
                if (it.index >= it.deque.len) return null;
                return it.deque.at(it.index);
            }
            pub fn next(it: *Iterator) ?T {
                const item = it.peek() orelse return null;
                it.index += 1;
                return item;
            }

            pub fn peekPtr(it: Iterator) ?*T {
                if (it.index >= it.deque.len) return null;
                return it.deque.atPtr(it.index);
            }
            pub fn nextPtr(it: *Iterator) ?*T {
                const item_ptr = it.peekPtr() orelse return null;
                it.index += 1;
                return item_ptr;
            }
        };

        /// Iterates over all items in the deque in order from front to back.
        pub fn iterator(deque: *const Self) Iterator {
            return .{ .deque = deque, .index = 0 };
        }

        /// Returns the index in `buffer` where the element at the given
        /// index in the logical deque is stored.
        fn bufferIndex(deque: *const Self, index: usize) usize {
            // This function is written in this way to avoid overflow and
            // expensive division.
            const head_len = deque.buffer.len - deque.head;
            if (index < head_len) {
                return deque.head + index;
            } else {
                return index - head_len;
            }
        }
    };
}

/// Integer addition returning `error.OutOfMemory` on overflow.
fn addOrOom(a: usize, b: usize) error{OutOfMemory}!usize {
    const result, const overflow = @addWithOverflow(a, b);
    if (overflow != 0) return error.OutOfMemory;
    return result;
}

test "basic" {
    const testing = std.testing;
    const gpa = testing.allocator;

    var q: Deque(u32) = .empty;
    defer q.deinit(gpa);

    try testing.expectEqual(null, q.popFront());
    try testing.expectEqual(null, q.popBack());

    try q.pushBack(gpa, 1);
    try q.pushBack(gpa, 2);
    try q.pushBack(gpa, 3);
    try q.pushFront(gpa, 0);

    try testing.expectEqual(0, q.popFront());
    try testing.expectEqual(1, q.popFront());
    try testing.expectEqual(3, q.popBack());
    try testing.expectEqual(2, q.popFront());
    try testing.expectEqual(null, q.popFront());
    try testing.expectEqual(null, q.popBack());
}

test "buffer" {
    const testing = std.testing;

    var buffer: [4]u32 = undefined;
    var q: Deque(u32) = .initBuffer(&buffer);

    try testing.expectEqual(null, q.popFront());
    try testing.expectEqual(null, q.popBack());

    try q.pushBackBounded(1);
    try q.pushBackBounded(2);
    try q.pushBackBounded(3);
    try q.pushFrontBounded(0);
    try testing.expectError(error.OutOfMemory, q.pushBackBounded(4));

    try testing.expectEqual(0, q.popFront());
    try testing.expectEqual(1, q.popFront());
    try testing.expectEqual(3, q.popBack());
    try testing.expectEqual(2, q.popFront());
    try testing.expectEqual(null, q.popFront());
    try testing.expectEqual(null, q.popBack());
}

test "slow growth" {
    const testing = std.testing;
    const gpa = testing.allocator;

    var q: Deque(i32) = .empty;
    defer q.deinit(gpa);

    try q.ensureTotalCapacityPrecise(gpa, 1);
    q.pushBackAssumeCapacity(1);
    try q.ensureTotalCapacityPrecise(gpa, 2);
    q.pushFrontAssumeCapacity(0);
    try q.ensureTotalCapacityPrecise(gpa, 3);
    q.pushBackAssumeCapacity(2);
    try q.ensureTotalCapacityPrecise(gpa, 5);
    q.pushBackAssumeCapacity(3);
    q.pushFrontAssumeCapacity(-1);
    try q.ensureTotalCapacityPrecise(gpa, 6);
    q.pushFrontAssumeCapacity(-2);

    try testing.expectEqual(-2, q.popFront());
    try testing.expectEqual(-1, q.popFront());
    try testing.expectEqual(3, q.popBack());
    try testing.expectEqual(0, q.popFront());
    try testing.expectEqual(2, q.popBack());
    try testing.expectEqual(1, q.popBack());
    try testing.expectEqual(null, q.popFront());
    try testing.expectEqual(null, q.popBack());
}

test "slice" {
    const testing = std.testing;
    const gpa = testing.allocator;

    var q: Deque(i32) = .empty;
    defer q.deinit(gpa);

    try q.pushBackSlice(gpa, &.{ 3, 4, 5 });
    try q.pushBackSlice(gpa, &.{ 6, 7 });
    try q.pushFrontSlice(gpa, &.{2});
    try q.pushBackSlice(gpa, &.{});
    try q.pushFrontSlice(gpa, &.{ 0, 1 });
    try q.pushFrontSlice(gpa, &.{});

    try testing.expectEqual(0, q.popFront());
    try testing.expectEqual(1, q.popFront());
    try testing.expectEqual(7, q.popBack());
    try testing.expectEqual(6, q.popBack());

    try q.pushFrontSlice(gpa, &.{ 0, 1 });
    try q.pushBackSlice(gpa, &.{ 6, 7 });

    try testing.expectEqual(0, q.popFront());
    try testing.expectEqual(1, q.popFront());
    try testing.expectEqual(2, q.popFront());
    try testing.expectEqual(7, q.popBack());
    try testing.expectEqual(6, q.popBack());
    try testing.expectEqual(3, q.popFront());
    try testing.expectEqual(4, q.popFront());
    try testing.expectEqual(5, q.popBack());
    try testing.expectEqual(null, q.popFront());
    try testing.expectEqual(null, q.popBack());
}

test "iterator" {
    const testing = std.testing;
    const gpa = testing.allocator;

    var q: Deque(i32) = .empty;
    defer q.deinit(gpa);

    const items: []const i32 = &.{ 0, 1, 2, 3, 4, 5 };
    try q.pushFrontSlice(gpa, items);

    {
        var it = q.iterator();
        for (items) |item| {
            try testing.expectEqual(item, it.peek());
            try testing.expectEqual(item, it.next());
        }
        try testing.expectEqual(null, it.peek());
        try testing.expectEqual(null, it.next());
    }
    {
        var it = q.iterator();
        for (items) |item| {
            if (it.peekPtr()) |ptr| {
                try testing.expectEqual(item, ptr.*);
            } else return error.TestExpectedNonNull;
            if (it.nextPtr()) |ptr| {
                try testing.expectEqual(item, ptr.*);
            } else return error.TestExpectedNonNull;
        }
        try testing.expectEqual(null, it.peekPtr());
        try testing.expectEqual(null, it.nextPtr());
    }
}

test "fuzz against ArrayList oracle" {
    try std.testing.fuzz({}, fuzzAgainstArrayList, .{});
}

const FuzzAllocator = struct {
    smith: *std.testing.Smith,
    bufs: [2][256 * 4]u8 align(4),
    used_bitmap: u2,
    used_len: [2]usize,

    pub fn init(smith: *std.testing.Smith) FuzzAllocator {
        return .{
            .smith = smith,
            .bufs = undefined,
            .used_len = undefined,
            .used_bitmap = 0,
        };
    }

    pub fn allocator(f: *FuzzAllocator) std.mem.Allocator {
        return .{
            .ptr = f,
            .vtable = &.{
                .alloc = alloc,
                .resize = resize,
                .remap = remap,
                .free = free,
            },
        };
    }

    pub fn allocCount(f: *FuzzAllocator) u2 {
        return @popCount(f.used_bitmap);
    }

    fn alloc(ctx: *anyopaque, len: usize, a: std.mem.Alignment, _: usize) ?[*]u8 {
        const f: *FuzzAllocator = @ptrCast(@alignCast(ctx));
        assert(a == .@"4");
        assert(len % 4 == 0);

        const slot: u1 = @intCast(@ctz(~f.used_bitmap));
        const buf: []u8 = &f.bufs[slot];
        if (len > buf.len) return null;
        f.used_bitmap |= @as(u2, 1) << slot;
        f.used_len[slot] = len;
        return buf.ptr;
    }

    fn memSlot(f: *FuzzAllocator, mem: []u8) u1 {
        const slot: u1 = if (&mem[0] == &f.bufs[0][0])
            0
        else if (&mem[0] == &f.bufs[1][0])
            1
        else
            unreachable;
        assert((f.used_bitmap >> slot) & 1 == 1);
        assert(mem.len == f.used_len[slot]);
        return slot;
    }

    fn resize(ctx: *anyopaque, mem: []u8, a: std.mem.Alignment, new_len: usize, _: usize) bool {
        const f: *FuzzAllocator = @ptrCast(@alignCast(ctx));
        assert(a == .@"4");
        assert(f.allocCount() == 1);

        const slot = f.memSlot(mem);
        if (new_len > f.bufs[slot].len or f.smith.value(bool)) return false;
        f.used_len[slot] = new_len;
        return true;
    }

    fn remap(ctx: *anyopaque, mem: []u8, a: std.mem.Alignment, new_len: usize, _: usize) ?[*]u8 {
        const f: *FuzzAllocator = @ptrCast(@alignCast(ctx));
        assert(a == .@"4");
        assert(f.allocCount() == 1);

        const slot = f.memSlot(mem);
        if (new_len > f.bufs[slot].len or f.smith.value(bool)) return null;

        if (f.smith.value(bool)) {
            f.used_len[slot] = new_len;
            // remap in place
            return mem.ptr;
        } else {
            // moving remap
            const new_slot = ~slot;
            f.used_bitmap = ~f.used_bitmap;
            f.used_len[new_slot] = new_len;

            const new_buf = &f.bufs[new_slot];
            @memcpy(new_buf[0..mem.len], mem);
            return new_buf.ptr;
        }
    }

    fn free(ctx: *anyopaque, mem: []u8, a: std.mem.Alignment, _: usize) void {
        const f: *FuzzAllocator = @ptrCast(@alignCast(ctx));
        assert(a == .@"4");
        f.used_bitmap ^= @as(u2, 1) << f.memSlot(mem);
    }
};

fn fuzzAgainstArrayList(_: void, smith: *std.testing.Smith) anyerror!void {
    const testing = std.testing;

    var q_gpa_inst: FuzzAllocator = .init(smith);
    var l_gpa_buf: [q_gpa_inst.bufs[0].len]u8 align(4) = undefined;
    var l_gpa_inst: std.heap.FixedBufferAllocator = .init(&l_gpa_buf);
    const q_gpa = q_gpa_inst.allocator();
    const l_gpa = l_gpa_inst.allocator();

    var q: Deque(u32) = .empty;
    var l: std.ArrayList(u32) = .empty;

    const Action = enum(u8) {
        grow,
        push_back,
        push_front,
        push_back_slice,
        push_front_slice,
        pop_back,
        pop_front,
    };

    while (!smith.eosWeightedSimple(15, 1)) {
        const baseline = testing.Smith.baselineWeights(Action);
        const grow_weight: testing.Smith.Weight = .value(Action, .grow, 3);
        switch (smith.valueWeighted(Action, baseline ++ .{grow_weight})) {
            .push_back => {
                const item = smith.value(u32);
                try testing.expectEqual(
                    l.appendBounded(item),
                    q.pushBackBounded(item),
                );
            },
            .push_front => {
                const item = smith.value(u32);
                try testing.expectEqual(
                    l.insertBounded(0, item),
                    q.pushFrontBounded(item),
                );
            },
            .push_back_slice => {
                var buffer: [std.math.maxInt(u3)]u32 = undefined;
                const items = buffer[0..smith.value(u3)];
                for (items) |*item| {
                    item.* = smith.value(u32);
                }
                try testing.expectEqual(
                    l.appendSliceBounded(items),
                    q.pushBackSliceBounded(items),
                );
            },
            .push_front_slice => {
                var buffer: [std.math.maxInt(u3)]u32 = undefined;
                const items = buffer[0..smith.value(u3)];
                for (items) |*item| {
                    item.* = smith.value(u32);
                }
                try testing.expectEqual(
                    l.insertSliceBounded(0, items),
                    q.pushFrontSliceBounded(items),
                );
            },
            .pop_back => {
                try testing.expectEqual(l.pop(), q.popBack());
            },
            .pop_front => {
                try testing.expectEqual(
                    if (l.items.len > 0) l.orderedRemove(0) else null,
                    q.popFront(),
                );
            },
            // Growing by small, random, linear amounts seems to better test
            // ensureTotalCapacityPrecise(), which is the most complex part
            // of the Deque implementation.
            .grow => {
                const growth = smith.value(u3);
                try l.ensureTotalCapacityPrecise(l_gpa, l.items.len + growth);
                try q.ensureTotalCapacityPrecise(q_gpa, q.len + growth);
            },
        }
        try testing.expectEqual(l.getLastOrNull(), q.back());
        try testing.expectEqual(
            if (l.items.len > 0) l.items[0] else null,
            q.front(),
        );
        try testing.expectEqual(l.items.len, q.len);
        try testing.expectEqual(l.capacity, q.buffer.len);
        {
            var it = q.iterator();
            for (l.items) |item| {
                try testing.expectEqual(item, it.next());
            }
            try testing.expectEqual(null, it.next());
        }
        try testing.expectEqual(@intFromBool(q.buffer.len != 0), q_gpa_inst.allocCount());
    }
    q.deinit(q_gpa);
    try testing.expectEqual(0, q_gpa_inst.allocCount());
}