
Should the 1 billion row file be deterministic? #35

@datdenkikniet

Description
Currently it seems that the 1 billion row file is generated with a different random outcome on each run. Making the generation deterministic (i.e. seeded) would make sharing the 1 billion row file a little easier, since it would always be identical, and would ensure that everyone is running exactly the same test.

Just using a Random with a predefined seed to pick out stations, and seeding a Random with the hash code of the city name to obtain measurements should do the trick.
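For illustration, a minimal sketch of what that could look like in Java. The station list, seed value, temperature range, and class name below are placeholders, not the repository's actual generator code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Sketch of the proposed deterministic generation. Station names, the seed,
// and the temperature range are illustrative assumptions only.
public class DeterministicMeasurements {

    private static final String[] STATIONS = {"Hamburg", "Bulawayo", "Palembang"};
    private static final long STATION_PICK_SEED = 42L; // hypothetical fixed seed

    public static void main(String[] args) {
        // One Random with a predefined seed picks which station each row uses.
        Random stationPicker = new Random(STATION_PICK_SEED);

        // One Random per station, seeded with the city name's hash code, so the
        // measurement stream for each station is the same on every run.
        Map<String, Random> perStation = new HashMap<>();
        for (String station : STATIONS) {
            perStation.put(station, new Random(station.hashCode()));
        }

        for (int i = 0; i < 10; i++) { // 1_000_000_000 for the real file
            String station = STATIONS[stationPicker.nextInt(STATIONS.length)];
            double temperature = -99.9 + perStation.get(station).nextDouble() * 199.8;
            System.out.printf("%s;%.1f%n", station, temperature);
        }
    }
}
```

Every run of this would emit the same `station;measurement` lines, so two people generating the file independently would end up with identical input data.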
