Robot Framework by Examples

Introduction

Robot Framework (RF) is a popular keyword-driven test framework (at least in Finland it seems to be..). Recently I had to look into it again for some potential work-related opportunities. Have to say open source is great, but the docs could use improvements..

I made a few examples for the next time I come looking:

Installing

To install RF itself, pip does the job in Python. The following installs RF along with the Selenium keyword library and the Selenium WebDriver bindings those keywords use:

pip3 install robotframework
pip3 install selenium
pip3 install robotframework-seleniumlibrary

Using Selenium WebDriver as an example here, a Selenium driver for the selected browser is needed. For Chrome, one can be downloaded from the Chrome website itself, and similarly for other browsers on their respective sites. The installed driver needs to be on the operating system's search path. On macOS, this is as simple as adding it to the PATH. Assuming the driver is in the current directory:

PATH=.:$PATH

So just the dot, which works as long as the driver file is in the working directory when running the tests.

In PyCharm, the PATH can also be similarly added to run configuration environment variables.

General RF Script Structure

RF script elements are separated by a minimum of two spaces: this both indents test steps under a test, and separates keywords from their parameters. There is also the pipe-separated format, which might look a bit fancier, if you like. Sections are identified by three stars *** and a pre-defined name for the section.
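To illustrate, a minimal test in the pipe-separated format might look something like this (written from memory, so best check the RF docs for the exact details):

| *** Test Cases *** |
| Say Hello |
| | Log To Console | Hello Donkey |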

The following examples illustrate.

Examples

Built-in Keywords / Logging to console

The built-in keywords are available without importing any specific library; they are part of the BuiltIn library. A simple example of logging a statement to the console:

The .robot script (hello.robot in this case):

*** Test Cases ***
Say Hello
    Log To Console    Hello Donkey
    No Operation
    Comment           Hello ${bob}

The built-in keyword "Log To Console" writes the given parameter to the console. A hello world equivalent. To run the test, we can either write code to invoke the RF runner from Python, or use the RF command-line tools. Python runner example:

from robot.running import TestSuiteBuilder
from robot.api import ResultWriter

#https://robot-framework.readthedocs.io/en/3.0/autodoc/robot.running.html
suite = TestSuiteBuilder().build("./hello.robot")
result = suite.run(output="test_output.xml")
#ResultWriter(result).write_results(report='report.html', log="log.html")
ResultWriter("test_output.xml").write_results(report='report.html', log="log.html")

The "hello.robot" in above is the name of the test script file listed above also.

The strangest thing (for me) here is the writing of the log file. The docs suggest using the first approach I commented out above: the ResultWriter with the results object as a parameter. This generates the report.html and the log.html.

The problem is, the log.html is lacking all the prints, keywords, and test execution logs. Later on the same docs state that to get the actual logs, you have to pass in the name of the XML file that was created by the suite.run() method. This is the uncommented approach in the above code. Since the results object is also generated from this call, why does it not give the proper log? Oh dear. I don’t understand.

Commandline runner example:

robot hello.robot

This seems to automatically generate an appropriate log file (including execution and keyword trace). There are also a number of command-line options available, covering the properties I discuss next using the Python API. Maybe that is the general / preferred approach? But somehow I always end up needing to write my own executors to customize and integrate with everything, so..

Finally, on logging: Robot Framework captures the whole stdout and stderr, so statements like print() get written to the RF log and not to the actual console. I find this quite annoying, as it results in overly verbose logs with all the RF boilerplate/overhead. There is a StackOverflow answer on how to circumvent this though, from the RF author himself: write directly to sys.__stdout__, which RF leaves untouched. I could likely write my own keyword based on that if I needed more log customization, but it seems a bit complicated.
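As a minimal sketch of that idea (my own illustration with a hypothetical file name, not the exact code from the answer), a custom keyword library could write directly to the original stdout:

#console_writer.py
import sys

def write_to_real_console(message):
    #sys.__stdout__ is the interpreter's original stdout, which RF does not redirect
    sys.__stdout__.write(message + "\n")

Importing this with "Library    console_writer.py" would then provide a "Write To Real Console" keyword.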

Tags and Critical Tests

RF tags are something that can be used to filter and group tests. One use is to define some tests as "critical". If a critical test fails, the suite is considered failed.

Example of non-critical test filtering. First, defining two tests:

*** Test Cases ***
Say Hello Critical
    [Tags]            crit
    Log To Console    Hello Critical Donkey
    No Operation
    Comment           Hello ${bob}

Say Hello Non-Critical
    [Tags]            non-crit
    Log To Console    Hello Nice Donkey
    No Operation
    Comment           Hello ${bob}

Running them, while filtering with wildcard:

from robot.running import TestSuiteBuilder
from robot.api import ResultWriter

#https://robot-framework.readthedocs.io/en/3.0/autodoc/robot.running.html
suite = TestSuiteBuilder().build("./noncritical.robot")
result = suite.run(output="test_output.xml", noncritical="*crit")
ResultWriter("test_output.xml").write_results(report='report.html', log="log.html")

The above classifies all tests with tags matching the pattern "*crit" as non-critical. In this case, that matches both the tags "crit" and "non-crit", which is likely a bit wrong. So the report for this actually shows 2 non-critical tests.

The same execution with a non-existent non-critical tag:

from robot.running import TestSuiteBuilder
from robot.api import ResultWriter

#https://robot-framework.readthedocs.io/en/3.0/autodoc/robot.running.html
suite = TestSuiteBuilder().build("./noncritical.robot")
#this tag matches nothing in the given suite, so all tests should remain critical in the report
result = suite.run(noncritical="non")
ResultWriter(result).write_results(report='report.html', log="log.html")

This runs all tests as critical, since no test has a tag matching "non". To finally fix it, the filter should be exactly "non-crit", which does not match the tag "crit" but matches exactly "non-crit".
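So the fixed runner call (only the changed line from the earlier example) would be:

result = suite.run(output="test_output.xml", noncritical="non-crit")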

Filtering / Selecting Tests

There are also the include and exclude options, to include or exclude (surprise) tests with matching tags from execution.

A couple of tests with two different tags (as before):

*** Test Cases ***
Say Hello Critical
    [Tags]            crit
    Log To Console    Hello Critical Donkey
    No Operation
    Comment           Hello ${bob}

Say Hello Non-Critical
    [Tags]            non-crit
    Log To Console    Hello Nice Donkey
    No Operation
    Comment           Hello ${bob}

Run tests, include with wildcard:

from robot.running import TestSuiteBuilder
from robot.api import ResultWriter
from io import StringIO

#https://robot-framework.readthedocs.io/en/3.0/autodoc/robot.running.html
suite = TestSuiteBuilder().build("./include.robot")
stdout = StringIO()
result = suite.run(include="*crit", stdout=stdout)
ResultWriter(result).write_results(report='report.html', log="log.html")
output = stdout.getvalue()
print(output)

This includes both of the tests defined above, since both tags match the pattern. If the filter was "non", nothing would match, and an error is produced since there are no tests to run.

Creating new Keywords from Existing Keywords

Besides somebody else's keywords, custom keywords can be composed from existing keywords. Example test file:

*** Settings ***
Resource    simple_keywords.robot

*** Test Cases ***
Run A Google Search
    Search for      chrome    emoji wars
    Sleep           10s
    Close All Browsers

The file simple_keywords.robot, included by the Resource setting above:

*** Settings ***
Library  SeleniumLibrary

*** Keywords ***
Search for
    [Arguments]    ${browser_type}    ${search_string}
    Open browser    http://google.com/   ${browser_type}
    Press Keys      name:q    ${search_string}+ENTER

So the keyword is defined in a separate file, with its arguments declared using the [Arguments] notation, followed by the argument names. These are then referenced in the following keywords, Open Browser and Press Keys, imported from SeleniumLibrary. Simple enough.

Selenium Basics on RF

Due to the popularity of Selenium WebDriver and testing of web applications, there is a specific RF library with keywords built for it. This was installed back in the Installing section.

Basic example:

*** Settings ***
Library  SeleniumLibrary

*** Test Cases ***
Run A Google Search
    Open browser    http://google.com/   Chrome
    Press Keys      name:q    emoji wars+ENTER
    Sleep           10s
    Close All Browsers

Run it as always before:

from robot.running import TestSuiteBuilder
import robot

#https://robot-framework.readthedocs.io/en/3.0/autodoc/robot.running.html
suite = TestSuiteBuilder().build("./google_search.robot")
result = suite.run()

This should open up the Chrome browser, load Google in it, do a basic search, and close the browser windows. Assuming it finds the Chrome driver, as discussed in the Installing section.

Creating New Keywords in Python

Besides building keywords as composites of existing ones, building new ones with Python code is an option.

Example test file:

*** Settings ***
Library         google_search_lib.py    chrome

*** Test Cases ***
Run A Google Search
    Search for      emoji wars
    Sleep           10s
    Close

The above references google_search_lib.py, where the implementation is:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

class google_search_lib(object):
    driver = None

    @classmethod
    def get_driver(cls, browser):
        if cls.driver is not None:
            return cls.driver
        if (browser.lower()) == "chrome":
            cls.driver = webdriver.Chrome("../chromedriver")
        return cls.driver

    def __init__(self, browser):
        print("creating..")
        driver = google_search_lib.get_driver(browser)
        self.driver = driver
        self.wait = WebDriverWait(driver, 10)

    def search_for(self, term):
        print("open")
        self.driver.get("http://google.com/")
        search_box = self.driver.find_element_by_name("q")
        search_box.send_keys(term)
        search_box.send_keys(Keys.RETURN)

    def close(self):
        self.driver.quit()

Defining the library import name is a bit tricky. If the module and the class inside it have the same name, just the one name is needed; otherwise both the module and the class need to be named.

Again, running it as before:

from robot.running import TestSuiteBuilder

suite = TestSuiteBuilder().build("./google_search.robot")
result = suite.run()

If you think about this for a moment, there is some strange magic here. Why is the classmethod there? How is state managed within tests / suites? I borrowed the initial code for this example from a fine tutorial. It does not discuss the use of this annotation, but it seems to me that it is used to share the driver object during test execution.

Mapping Python Functions to Keywords

The mapping simply takes the keyword name and replaces spaces with underscores. So in the above google_search_lib.py example, the Search For keyword maps to the search_for() function, and the Close keyword maps to the close() function. Much complex, eh?

Test Setup and Teardown

Test setup and teardown are basic test framework functionality. RF supports them with specific settings in the Settings section.

Example test file:

*** Settings ***
Library         google_search_lib.py    chrome
Test Setup      Log To Console    Starting a test...
Test Teardown   Close

*** Test Cases ***
Run A Google Search
    Search for      emoji wars
    Sleep           10s

The referenced google_search_lib.py file is the same as above. This includes defining the close function / keyword used in Test Teardown.

Run it as usual:

from robot.running import TestSuiteBuilder

suite = TestSuiteBuilder().build("./google_search.robot")
result = suite.run()

Only a single keyword can be given for setup, and likewise for teardown. The RF docs suggest writing a custom keyword that composes multiple actions as needed.

The way the library class is defined and instantiated also affects the library's scope. It seems to get a bit tricky to manage resources, since depending on the scope the instances may be different in the setup, teardown, and tests, or shared across all tests. I think this is one of the reasons for using the classmethod annotation in the tutorial example I cited.

There would be much more to cover, such as variables in tests. RF also supports the BDD (Gherkin) syntax in addition to the keyword style I showed here, but the underlying framework is much the same in both cases.

Anyway, that’s all I am writing on today. I find RF quite straightforward once you get the idea, and not too complex to use even with the docs not being so straightforward. Overall, a very simple concept, and one the author(s) have managed to build a reasonable community around. Which I guess is what makes it useful and potentially successful.

I personally prefer writing software over putting keywords after one another, but for writing tests I guess this is one useful method. And maybe there is an art in itself to writing good, suitably abstracted, reusable yet concrete keywords?

That’s all, folks,…

My Pandas Cheat Sheet

https://stackoverflow.com/questions/36794433/python-using-multiprocessing-on-a-pandas-dataframe
http://www.racketracer.com/2016/07/06/pandas-in-parallel/

  import pandas as pd

  pd.__version__

Creating Dataframes

  df = pd.DataFrame()

select multiple columns as a dataframe from a bigger dataframe:

  df2 = df[['Id', 'team', 'winPlacePerc']]

select a single column as a dataframe:

  df2 = df[['name']]
  #double square brackets make the result a dataframe,
  #single brackets make it a series

pandas axis:

  axis 1 = columns, axis 0 = rows

get a series from a dataframe column filtered by another column:

  zero_names = df[df["weights"] < 10000]["names"]

turn it into a list:

  zn_list = zero_names.tolist()

N-way split:

  X_train, X_test, y_train, y_test, r_train, r_test = model_selection.train_test_split(
      X, y, r, test_size=0.25, random_state=99)

load only part of data: nrows
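for example (train.csv here is just an illustrative file name):

  df = pd.read_csv("train.csv", nrows=10000)  #read only the first 10000 rows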

Accessing Values

find row at 1111th index in dataframe:

  df.iloc[1111]

find row with value 1111 for index in dataframe:

  df.loc[1111]

set value in cell (stackoverflow):

  children_df.at[555, "visited"] = True 
    #555 here refers to row index

modify rows while iterating over them:

  for index, row in df.iterrows():
    df.at[index,'open_change_p1'] = day_change_op1

drop last 5 columns

  df_train.drop(df_train.columns[-5:], axis=1)

print numpy data types in multidimensional array (in jupyter return value is printed):

  [type(row) for row in values[0]]

Aggregates

calculate mean for ‘team’ and ‘winPlacePerc’ columns, after grouping them by match id and group id:

  agg_cols = ['groupId', 'matchId', 'team', 'winPlacePerc']
  df_mean = df_test[agg_cols].groupby(["groupId", "matchId"],
    as_index=False).mean().add_suffix("_mean")

run multiple such aggregations at once:

  agg = df_train.groupby(['matchId']).agg(
      {'players_in_team': ['min', 'max', 'mean', 'median']})

specify the name suffixes of the generated aggregation column names:

  agg_train = df_train[agg_cols].groupby(["groupId", "matchId"], as_index=False).agg(
      [('_min', 'min'), ('_max', 'max'), ('_mean', 'mean')])

multiple aggregations will create a 2-level column header (see df.head()). to change it into one level (for merge etc):

  mi = agg_train.columns #mi for multi-index
  ind = pd.Index([e[0] + e[1] for e in mi.tolist()])
  agg_train.columns = ind

custom aggregation function:

  def q90(x):
      return x.quantile(0.9)

  agg = df_train.groupby(['matchId']).agg(
      {'players_in_team': ['min', 'max', 'mean', 'median', q90]})

create new column as number of rows in a group:

  agg = df_train.groupby(['matchId']).size().to_frame('players_in_match')

group and sum column for groups:

  revenues = (df_train.groupby("geoNetwork_subContinent")['totals_transactionRevenue']
              .sum().sort_values(ascending=False))

Merge

merge two dataframes (here df_train and agg) by a single column:

  df_train = df_train.merge(agg, how='left', on=['groupId'])

merge on multiple columns:

  dfg_train = dfg_train.merge(agg_train, how='left',
     on=["groupId", "matchId"])

set merge suffixes to ["", "_rank"], i.e., the left- and right-side suffixes for overlapping columns:

  dfg_test = dfg_test.merge(agg_test_rank, suffixes=["", "_rank"],
        how='left', on=["groupId", "matchId"])

above sets columns from dfg_test to have no suffix ("") and columns from agg_test_rank to have suffix "_rank"

merge columns by value:

  df_train['rankPoints'] = np.where(df_train['rankPoints'] < 0,
       df_train['winPoints'], df_train['rankPoints'])

the above sets the value to winPoints if rankPoints is below zero, else keeps rankPoints

merge by defining the column names to match on left and right:

  pd.merge(left, right, left_on='key1', right_on='key2')

merge types (the how=merge_type parameter in pd.merge):

  • inner: keep rows that match in both left and right
  • outer: keep all rows in both left and right
  • left: keep all rows from left and matching ones from right
  • right: keep all rows from right and matching ones from left

Rename, Delete, Compare Dataframes

rename columns to remove a suffix (here remove _mean):

  df_mean = df_mean.rename(columns={'groupId_mean': 'groupId',
               'matchId_mean': 'matchId'}) 

delete dataframes to save memory:

  del df_agg

find all columns in one dataframe but not in another

  diff_set = set(train_df.columns).difference(set(test_df.columns))
  print(diff_set)

find all columns that appear both in a dataframe and in a given list, and drop them

  common_set = set(train_df.columns).intersection(set(FEATS_EXCLUDED))
  train_df = train_df.drop(common_set, axis=1)

drop rows with index in list (stackoverflow):

  df.drop(df.index[[1,3]])

replace nulls/nans with 0:

  X.fillna(0)
  X_test.fillna(0)

only for specific columns:

  df[['a', 'b']] = df[['a','b']].fillna(value=0)

drop rows with null values for specific column:

  df_train.dropna(subset=['winPlacePerc'], inplace=True)

drop columns B and C:

  df.drop(['B', 'C'], axis=1)
  df.drop(['B', 'C'], axis=1, inplace = True)

Dataframe Statistics

this df has 800k rows (values) and 999 columns (features):

  df_train_subset.shape
  (800000, 999)

data types:

  df_train.dtypes

number of rows, columns, memory size (light, fast):

  df.info()

statistics per feature/column (cpu/mem heavy):

  df.describe()

number of unique values:

  df.nunique()

bounds:

  df.min()
  df.max()

replace infinity with 0, especially since scalers can have issues with inf:

  X[col] = X[col].replace(np.inf, 0)

replace positive and negative inf with nan:

  df_pct.replace([np.inf, -np.inf], np.nan)

number of non-nulls per row in dataframe:

  df_non_null = df.notnull().sum(axis=1)

find the actual rows with null values:

  df_train[df_train.isnull().any(axis=1)]

number of times different values appear in column:

  df_train["device_operatingSystem"].value_counts()

sorted by index (e.g., 0,1,2,…)

  phase_counts = df_tgt0["phase"].value_counts().sort_index()

number of nans per column in dataframe:

  df.isnull().sum()

find columns with only one unique value

  const_cols = [c for c in train_df.columns 
                if train_df[c].nunique(dropna=False)==1 ]

drop them

  train_df = train_df.drop(const_cols, axis=1)

replace outlier values over 3std with 3std:

  mean = df_train['totals_transactionRevenue'].mean()
  std = df_train['totals_transactionRevenue'].std()
  upper = mean + 3*std
  mask = np.abs(df_train['totals_transactionRevenue'] - mean) > 3*std
  df_train.loc[mask, 'totals_transactionRevenue'] = upper

or use zscore:

  df['zscore'] = (df.a - df.a.mean())/df.a.std(ddof=0)

from scipy (stackoverflow)

  from scipy import stats
  import numpy as np
  z = np.abs(stats.zscore(boston_df))
  print(z)

show groupby object data statistics for each column by grouped element:

  grouped.describe()

create dataframe from classifier column names and importances (where supported), sort by weight:

  df_feats = pd.DataFrame()
  df_feats["names"] = X.columns
  df_feats["weights"] = clf.feature_importances_
  df_feats.sort_values(by="weights")

diff columns to show the absolute change from row to row:

  for col in input_col_names:
    df_diff[col] = df_sig[col].diff().abs()

unique datatypes in dataframe:

  train_df.dtypes.unique()

show columns of selected type:

  train_df.select_dtypes(include=['float64'])

Numpy

Get numpy matrix from dataframe:

  data_diffs = df_diff.values

get column from numpy matrix as row

  for sig in range(0, 3):
      data_sig = data_measure[:,sig]

slice numpy array:

  bin_data_raw = data_sig[i:i + bin_size]

calculate statistics over numpy array:

  bin_avg_raw = bin_data_raw.mean()
  bin_sum_raw = bin_data_raw.sum()
  bin_std_raw = bin_data_raw.std()
  bin_percentiles = np.percentile(bin_data_raw, [0, 1, 25, 50, 75, 99, 100])
  bin_range = bin_percentiles[-1] - bin_percentiles[0]
  bin_rel_perc = bin_percentiles - bin_avg_raw

count outliers at different scales:

  outliers = []
  for r in range(1, 7):
      t = bin_std_diff * r
      o_count = np.sum(np.abs(bin_data_diff-bin_avg_diff) >= t)
      outliers.append(o_count)

concatenate multiple numpy arrays into one:

  bin_row = np.concatenate([raw_features, diff_features, bin_percentiles, bin_rel_perc, outliers])

limit values between min/max:

  df_sig.clip(upper=127, lower=-127)

value range in column (ptp = peak to peak):

  df.groupby('GROUP')['VALUE'].agg(np.ptp)

replace nan:

  my_arr[np.isnan(my_arr)] = 0

Time-Series

create values for percentage changes over time (row to row):

  df_pct = pd.DataFrame()
  for col in df_train_subset.columns[:30]:
      df_pct['pct_chg_'+col] = df_train_subset[col].pct_change().abs()

pct_change() gives the change in percentage over time, abs() makes it absolute if just looking for overall change as in negative or positive.

also possible to set number of rows to count pct_change over:

  df_pct['pct_chg_'+col] = df_train_subset[col].pct_change(periods=10)

pct_change on a set of items with specific values:

  df['pct_chg_open1'] = df.groupby('assetCode')['open'].apply(lambda x: x.pct_change())

TODO: try different ways to calculate ewm to see how this all works (stackoverflow: https://stackoverflow.com/questions/37924377/does-pandas-calculate-ewm-wrong):

  df["close_ewma_10"] = df.groupby('assetName')['pct_chg_close1']
            .apply(lambda x: x.ewm(span=10).mean())

shift by 10 backwards

  y = mt_df["mket_close_10"].shift(-10)

Date Types

convert seconds into datetime

  start_col = pd.to_datetime(df_train.visitStartTime, unit='s')

parse specific columns as specific date types:

  df_train = pd.read_csv("../input/train-flat.csv.gz",
                         parse_dates=["date"], 
                         dtype={'fullVisitorId': 'str',
                              'date': 'str'
                             },
                        )

or if need to parse specific format:

  pd.to_datetime('13000101', format='%Y%m%d', errors='ignore')

access elements of datetime objects:

  data_df["weekday"] = data_df['date'].dt.weekday
  data_df["hour"] = data_df['date'].dt.hour

Data Type Manipulations

one-hot encode / convert categorical:

  dfg_train = pd.get_dummies( dfg_train, columns = ['matchType'] )
  dfg_test = pd.get_dummies( dfg_test, columns = ['matchType'] )

set category value by range:

  #here 1 = solo game, 2 = duo game, 3 = squad, 4 = custom
  df_train['team'] = [1 if i == 1 else 2 if i == 2 else 4 if i > 4 else 3
                      for i in df_train["players_in_team_q90"]]

calculate value over two columns and make it a new column:

  dfg_train['killsNorm'] = dfg_train['kills_mean'] * ((100-dfg_train['players_in_match'])/100 + 1)

  data_df['hit_view_ratio'] = data_df['totals_pageviews']/data_df['totals_hits']

mark a set of columns as category type:

  for col in X_cat_cols:
      df_train[col] = df_train[col].astype('category')

set category value based on set of values in column:

  X_test['matchType'] = X_test['matchType'].astype('str')
  X_test.loc[X_test['matchType'] == "squad-fpp", 'matchType'] =
     "squad"
  X_test['matchType'] = X_test['matchType'].astype('category')

how to get the indices from a dataframe as a list (e.g., collect a subset):

  list(outlier_collection.index.values)

to see it all on one line in Jupyter (easier copying, e.g. to drop all in list):

  print(list(outlier_collection.index.values))

Jupyter

show dataframe as visual table in notebook (stackoverflow):

  from IPython.display import display, HTML

  display(df1)
  display(HTML(df2.to_html()))

PyCharm also supports notebooks (see the JetBrains help pages).

correlations: unstacking the correlation matrix to sort all pairwise correlations:
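as a small sketch of the idea (my own example, not from the link):

  corr = df.corr()
  #unstack the correlation matrix into a series indexed by column pairs, sorted
  corr_pairs = corr.unstack().sort_values(ascending=False)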

Memory Reducer (from a Kaggle kernel):

import numpy as np

def reduce_mem_usage(df):
    """ iterate through all the columns of a dataframe and modify the data type
        to reduce memory usage.        
    """
    start_mem = df.memory_usage().sum() / 1024**2
    print('Memory usage of dataframe is {:.2f} MB'.format(start_mem))
    
    for col in df.columns:
        col_type = df[col].dtype
        
        if col_type != object and col_type.name != 'category':
            #print(col_type)
            c_min = df[col].min()
            c_max = df[col].max()
            if str(col_type)[:3] == 'int':
                if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
                    df[col] = df[col].astype(np.int8)
                elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
                    df[col] = df[col].astype(np.int16)
                elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
                    df[col] = df[col].astype(np.int32)
                elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
                    df[col] = df[col].astype(np.int64)  
            else:
                if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
                    df[col] = df[col].astype(np.float16)
                elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
                    df[col] = df[col].astype(np.float32)
                else:
                    df[col] = df[col].astype(np.float64)
        else:
            df[col] = df[col].astype('category')

    end_mem = df.memory_usage().sum() / 1024**2
    print('Memory usage after optimization is: {:.2f} MB'.format(end_mem))
    print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))
    
    return df

Porting Naivecoin tutorial to Golang (first parts)

Recently I got interested in blockchains, and how they work in practice. I found a nice tutorial called Naivecoin: a tutorial for building a cryptocurrency. There is also a nice book on the topic called Mastering Bitcoin: Programming the Open Blockchain. Had to get that too, didn’t I.

So to get a better understanding of the topic, I tried implementing the tutorial in Go. Using a different programming environment forces me to look up the actual implementation and try to understand it, as opposed to just copy-pasting all the code. I implemented the parts up to part 3, and set my code up on Github; hopefully I will update it with the remaining parts of the Naivecoin tutorial, and other related topics, sometime later.

The Naivecoin first part focuses on building the basic blockchain. To start, a block structure is needed to describe the blockchain:

type Block struct {
	Index        int  //the block index in the chain
	Hash         string //hash for this block
	PreviousHash string //hash for previous block
	Timestamp    time.Time //time when this block was created
	Data         string //the data in this block. could be anything. not really needed since real data is transaction but for fun..
	Transactions []Transaction  //the transactions in this block
	Difficulty	 int //block difficulty when created
	Nonce		 int //nonce used to find the hash for this block
}

A blockchain is a long list (or a chain) of blocks. The Block structure above has various fields to describe its position in the chain, its contents, and metadata used to provide assurance over the chain validity.

  • Index: The number of the block on the blockchain. So 1st block is 1, 10th block is 10.
  • Hash: Cryptographic hash of all the fields in the block together. Verifies the block has not been modified since creation (e.g., change transaction address).
  • PreviousHash: Should match the Hash value of the previous block in the blockchain. Useful to verify chain integrity I am sure.
  • Timestamp: Time when this block was created. Used for hash variation, and to keep the creation of blocks within a defined time limit (to make some attacks harder).
  • Data: The blockchain is supposed to store data in an “immutable ledger”. I guess the typical blockchain application is still cryptocurrencies, where the data is the Transactions part. I put this extra data field in just to play with other data if I feel like it.
  • Transactions: List of transactions included in this block. Moving coins from one person to the other.
  • Difficulty: The mining difficulty when this block was created. Useful to check the hash validity etc.
  • Nonce: A value to vary in order to find a hash matching the set difficulty. With a higher difficulty, more nonce variation is typically required. But for this tutorial implementation I guess this is fine.

The blockchain is composed of these blocks, each referencing the previous one, in a chain. Thus the blockchain, woohoo. So, first how to calculate the hash for a block:

//calculate hash string for the given block
func hash(block *Block) string {
	indexStr := strconv.Itoa(block.Index)
	timeStr := strconv.FormatUint(uint64(block.Timestamp.Unix()), 16) //base 16 output
	nonceStr := strconv.Itoa(block.Nonce)
	diffStr := strconv.Itoa(block.Difficulty)
	txBytes, _ := json.Marshal(block.Transactions)
	txStr := string(txBytes)
	//this joins all the block elements to one long string with all elements appended after another, to produce the hash
	blockStr := strings.Join([]string{indexStr, block.PreviousHash, timeStr, diffStr, block.Data, txStr, nonceStr}, " ")
	bytes := []byte(blockStr)
	hash := sha256.Sum256(bytes)
	s := hex.EncodeToString(hash[:]) //encode the Hash as a hex-string. the [:] is slicing to match datatypes in args
	return s
}

I think Bitcoin uses something like a Merkle tree to combine elements in a block. But in the above, all the elements in a block are simply turned into strings, concatenated into a single long string, and this string is hashed. So again, we could maybe do better but it works well for a tutorial such as Naivecoin.

Now, to create the blocks. As visible in the structure above, each block has to reference the previous ones to create the chain. And to provide assurance over the overall chain, hashes of the block and of the previous block are provided. But the chain has to start somewhere. The first block in the chain is called the genesis block. It has no previous hash, as it has no previous block. So we create it specifically first:

//create genesis block, the first one on the chain to bootstrap the chain
func createGenesisBlock(addToChain bool) Block {
	genesisTime, _ := time.Parse("Jan 2 15:04 2006", "Mar 15 19:00 2018")
	block := Block{1, "", "0", genesisTime, "Teemu oli täällä", nil, 1, 1} //the data string is Finnish for "Teemu was here"
	hash := hash(&block)
	block.Hash = hash
	if addToChain {
		globalChain = append(globalChain, block)
	}
	return block
}

More generally, to create non-genesis blocks:

//create a block from the given parameters, and find a nonce to produce a hash matching the difficulty
//finally, append new block to current chain
func createBlock(txs []Transaction, blockData string, difficulty int) Block {
	chainLength := len(globalChain)
	previous := globalChain[chainLength-1]
	index := previous.Index + 1
	timestamp := time.Now().UTC()
	nonce := 0
	newBlock := Block{index, "", previous.Hash, timestamp, blockData, txs, difficulty, nonce}
	for {
		hash := hash(&newBlock)
		newBlock.Hash = hash
		if verifyHashVsDifficulty(hash, difficulty) {
			addBlock(newBlock)
			return newBlock
		}
		nonce++
		newBlock.Nonce = nonce
	}
}

The above code takes the block transactions, data and the current difficulty of the chain as parameters. The rest of the information required to create a block is taken from the previous block (block index, previous block hash) or system clock (current timestamp). Finally, it loops the different nonce values until it finds a hash matching the current difficulty level. Like I mentioned before, it might be a bit simplified, but this is certainly good enough for a tutorial such as the Naivecoin.

So, in this case, to verify the block difficulty:

func verifyHashVsDifficulty(hash string, difficulty int) bool {
	prefix := strings.Repeat("0", difficulty)
	return strings.HasPrefix(hash, prefix)
}

This is quite simple: it just checks that the given hash string starts with a given number of zeroes, so the measure of difficulty is the number of leading zeroes the hash should have. In a more general case, I believe the difficulty can be a numeric target under which the hash value should fall (2nd top answer at the time of writing this). That gives much more granularity to define the difficulty.. But, again, no need to go overly complex on things in a Naivecoin tutorial.
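As a rough sketch of that more general idea (my own illustration, not part of the tutorial code; needs the math/big package):

//sketch: difficulty expressed as a numeric target the hash value must fall under
func verifyHashVsTarget(hash string, target *big.Int) bool {
	hashInt := new(big.Int)
	hashInt.SetString(hash, 16) //parse the hex hash string as a base-16 integer
	return hashInt.Cmp(target) < 0 //valid if the hash value falls under the target
}

A smaller target means fewer acceptable hashes, and thus a higher difficulty.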

Similarly, adding a block to the blockchain is quite simple:

//add a new block to the existing chain
func addBlock(block Block) {
	chainLength := len(globalChain)
	previousBlock := globalChain[chainLength-1]
	block.PreviousHash = previousBlock.Hash
	globalChain = append(globalChain, block)
	for _, tx := range block.Transactions {
		addTransaction(tx)
	}
	//todo: check block hash matches difficulty
}

So as I commented above, I should also check that the new block matches the required difficulty. Would not be too hard but I will leave that update for the next version.

Adding the block also requires adding all the transactions within that block to the list of active transactions:

func addTransaction(tx Transaction) {
	oldTx := findUnspentTransaction(tx.Sender, tx.Id)
	if oldTx >= 0 {
		print("transaction already exists, not adding: ", tx.Id)
		return
	}
	allTransactions = append(allTransactions, tx)
	for _, txIn := range tx.TxIns {
		deleteUnspentTransaction(tx.Sender, txIn.TxId)
	}
	for idx, txOut := range tx.TxOuts {
		utx := UnspentTxOut{tx.Id, idx, txOut.Address, txOut.Amount}
		unspentTxOuts = append(unspentTxOuts, utx)
	}
}

So, as the Naivecoin tutorial so nicely explains, each transaction has two types of “sub-transactions”, as I call them here. Not an official term or anything. But these are:

  • TxIn: Transaction inputs.
  • TxOut: Transaction outputs.

I found the explanations of how these work very confusing to start with. Eventually I got it, but it is not the most intuitive thing, especially with the terminology of Transaction (Tx), TxIn, TxOut, … Especially since a TxIn and a TxOut are mostly the same thing, just expressed in a different way. Well, that is my understanding anyway. Please do correct me.

A TxOut (would it be correct to call it a transaction output?) is what creates “coins” to spend. Sending coins to someone creates a TxOut. Since a single transaction can contain multiple TxOut instances, you can send coins to multiple people, or multiple wallets, in a single transaction. The most common pattern seems to be to send coins to someone else and the change back to yourself. Why is this?

To create TxOut’s you need a matching amount of TxIn’s; that is, the amounts of coins referenced in each should match. This check should actually be part of the addTransaction function above: verifying the transaction is valid by checking that the input and output amounts add up to the same sums. But you can only add existing TxOut instances to a transaction as TxIn’s. So if you got a TxOut giving you 100 coins, and you want to send someone coins, you can only add that whole TxOut to your next transaction as a TxIn. So the TxIn has to be 100 coins in this case. What if you want to send only 50 coins? Then you put in the single TxIn with 100 coins, but create 2 TxOut’s for that transaction: one to send 50 coins to the receiver, and another to send the remaining 50 coins back to yourself. This way the balances match again. Confusing? Yes. Check the pretty pictures some way down this post.

Of course, if you send coins to an address that is not used, you will “burn” those coins. No-one can ever access them, because no-one knows the private key to match the public key used to sign that TxOut. You could maybe make a lot of money if you could find a key to some of the bigger coin-burn addresses. But I digress.

How do the coins initially come into existence, if you can only send coins by referencing an existing TxOut? Where does the first TxOut come from? It has to all be bootstrapped somehow, right? This is what is called a coinbase transaction. No, not named after the cryptocurrency-selling company; they took their name from the term, not the other way around. A coinbase transaction might look something like this:

var COINBASE_AMOUNT = 1000

func createCoinbaseTx(address string) Transaction {
	var cbTx Transaction

	var txIn TxIn
	txIn.TxIdx = len(globalChain)
	cbTx.TxIns = append(cbTx.TxIns, txIn)

	var txOut TxOut
	txOut.Amount = COINBASE_AMOUNT
	txOut.Address = address
	cbTx.TxOuts = append(cbTx.TxOuts, txOut)

	cbTx.Id = calculateTxId(cbTx)
	cbTx.Signature = "coinbase"

	return cbTx
}

The above code creates the special transaction called the coinbase transaction. In proof-of-work type solutions, such as in the Naivecoin tutorial, this is added to each mined block. So when a miner finds a hash for a new block (thus “creating” it), they get the reward for the block. This reward is paid as the coinbase transaction. Each block can have one, and the miner can create it in the block. I would expect the miner then puts their own address in the transaction as the receiver. For the TxOut address. The TxIn in this case comes from nowhere, from thin air, or from a made up input. This is how the money is made here..

To my understanding, the way coinbase transactions are verified is simply by each node in the network accepting only a single coinbase transaction in a block, with specific parameters. These should match the number of coins to be rewarded according to the coin specification, and the defined coinbase signature. I believe in Bitcoin you can set many of the other fields as you like. For example, people can hide messages in the TxIn address or some other fields of the coinbase transaction. Because they do not need to match anything real. Once the coinbase amount is issued to a miner, they can then send coins using the coinbase TxOut as part of their new transaction.
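As a sketch of the kind of checks described above (my own illustration, not code from the tutorial; blockIndex here would be the height of the block holding the transaction):

//sketch: accept only a coinbase transaction with the expected parameters
func validateCoinbaseTx(tx Transaction, blockIndex int) bool {
	if len(tx.TxIns) != 1 || len(tx.TxOuts) != 1 {
		return false //a coinbase transaction has exactly one input and one output
	}
	if tx.TxIns[0].TxIdx != blockIndex {
		return false //the coinbase TxIn index should match the block height
	}
	if tx.TxOuts[0].Amount != COINBASE_AMOUNT {
		return false //the reward must match the coin specification
	}
	return tx.Signature == "coinbase" //and the defined coinbase signature
}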

The related data structures:

type TxOut struct {
	Address string	//receiving public key
	Amount int		//amount of coin units to send/receive
}

type TxIn struct {
	TxId      string	//id of the transaction inside which this TxIn should be found (in the list of TxOut)
	TxIdx     int		//index of TxOut this refers to inside the transaction
}

type UnspentTxOut struct {
	TxId    string	//transaction id
	TxIdx   int		//index of txout in transaction
	Address string  //public key of owner
	Amount  int		//amount coin units that was sent/received
}

//transaction is from a single person/entity
//inputs from that entity use funds, outputs show how much and where
//all the time there should be some list kept and updated based on this
type Transaction struct {
	Id string
	Sender string //the address/public key of the sender
	Signature string	//signature including all txin and txout. in this case we sign Transaction.Id since that already has all the TxIn and TxOut
	TxIns []TxIn
	TxOuts []TxOut
}

Each block contains a set of transactions, which contain a set of TxIn and TxOut.

  • The TxOut defines the address (encoding of recipients public key) of the recipient, and the amount of coins to send.
  • The TxIn defines a reference to a previous TxOut. So an identifier to find the transaction which contains it, and the index of the TxOut within that transaction.
  • The transaction itself contains the address (public key) of the sender. Since each TxIn and TxOut is contained in the transaction, and references that transaction, the owner can be checked against the address of the sender in the transaction. So the recipient in the “spent” TxOut should match the sender of this new transaction.
  • A prespecified way of calculating the transaction id is defined in the coin specification. This is then used to create and encode a hash value for the transaction id. I guess most real coins would use some form of Merkle-trees as linked high above. In this case the naivecoin simply adds the information on the sender and contained TxIn and TxOut instances into a long string and hashes that.
  • The signature is used to sign the transaction id (already containing all the transaction data in the hash) with the sender private key, using ECDSA signatures, as described in my previous post. The authenticity of the transaction and all its contents can then be verified by using the public key of the sender as embedded in the sender address to verify the signature. Brilliant stuff.

In pictures this looks something like this:

[figure tx2: Bob sends 550 coins to Alice]

In the example in the figure above, Bob is sending 550 coins to Alice. To do this, Bob needs TxIn’s that sum up to a minimum of 550. He has a set that sums to 600, so these are used as the TxIn’s in this transaction. As 600 is more than 550, two TxOut’s are created: one assigns the 550 coins to Alice, and the other the remaining 50 coins back to Bob. The blockchain tracks coin amounts using the TxOut instances stored within transactions, so it can only “spend” a full TxOut. This is why the “change” (50 coins in this example) generates a completely new TxOut.

The wording in the figure above about “deleting” inputs simply refers to the TxIn’s being marked as used, so they cannot be used again. That is, the previous TxOut they refer to (from earlier transactions that gave coins to Bob) are marked as spent. As the wallet application and blockchain nodes track which TxOut are unspent (not referenced by any accepted TxIn), the balance of a user can be counted as the sum of their unspent TxOut.

The wording for “creating” new TxOut in the figure above refers to adding new TxOut to the list of unspent TxOut. In this case it adds one for Bob (50 coins) and one for Alice (550 coins).

The code for TxOut, TxIn, and UnspentTxOut higher above shows the structure of each. TxId ties each TxOut and TxIn to the specific transaction where the associated TxOut was created. A TxIn thus refers to an earlier TxOut, and serves as a way to mark that TxOut as “spent”. The reference consists of the transaction id (each transaction in the blockchain having a unique id) and TxIdx, the index of the TxOut within that transaction. Since a transaction can have multiple TxOut, the index identifies exactly which one is meant.

Each TxOut is assigned to a specific address, and the address encodes the recipient’s public key. Each transaction is signed using the sender’s private key, and this signature can be verified using the signer’s public key. Thus each TxIn can be verified to be spendable by the person who signed the transaction containing it: following the TxIn reference to the TxOut it refers to, and looking up the address that TxOut was assigned to, gives a public key. By checking that this public key matches the private key used to sign the new transaction wanting to spend that TxOut, it can be verified that the person creating the transaction is in possession of the private key associated with that public key.

Proper transaction verification should check that all TxIn match the transaction signature, that there is a sufficient number of TxIn, and that the TxIn total sum matches the total sum of TxOut. Probably much more too, but that is a good start.
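To sketch what those checks might look like with the structures defined above (my own illustration, not the tutorial’s verification code; the signature check itself is left out here):

//sketch: verify that each TxIn references an unspent TxOut owned by the sender,
//and that the input and output coin totals match
func validateTransaction(tx Transaction) bool {
	inSum := 0
	for _, txIn := range tx.TxIns {
		found := false
		for _, utx := range unspentTxOuts {
			if utx.TxId == txIn.TxId && utx.TxIdx == txIn.TxIdx && utx.Address == tx.Sender {
				inSum += utx.Amount
				found = true
				break
			}
		}
		if !found {
			return false //TxIn does not reference a valid unspent TxOut of the sender
		}
	}
	outSum := 0
	for _, txOut := range tx.TxOuts {
		outSum += txOut.Amount
	}
	return inSum == outSum //the input and output totals must match
}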

Much confusing all this is. Hope you, dear reader, if you exist, find more information to understand my ramblings and to correct me.

After the transaction in the figure above, the end result looks something like this:

[figure tx3: the resulting unspent outputs after the transaction above]

With all this in place, we can write the code to send coins:

func sendCoins(privKey *ecdsa.PrivateKey, to string, count int) Transaction {
	from := encodePublicKey(privKey.PublicKey)
	//TODO: error handling
	txIns, total := findTxInsFor(from, count)
	txOuts := splitTxIns(from, to, count, total)
	tx := createTx(privKey, txIns, txOuts)
	return tx
}

func findTxInsFor(address string, amount int) ([]TxIn, int) {
	balance := 0
	var unspents []TxIn
	for _, val := range unspentTxOuts {
		if val.Address == address {
			balance += val.Amount
			txIn := TxIn{val.TxId, val.TxIdx}
			unspents = append(unspents, txIn)
		}
		if balance >= amount {
			return unspents, balance
		}
	}
	return nil, -1
}

func splitTxIns(from string, to string, toSend int, total int) []TxOut {
	diff := total - toSend
	txOut := TxOut{to, toSend}
	var txOuts []TxOut
	txOuts = append(txOuts, txOut)
	if diff == 0 {
		return txOuts
	}
	txOut2 := TxOut{from, diff}
	txOuts = append(txOuts, txOut2)
	return txOuts
}

func createTx(privKey *ecdsa.PrivateKey, txIns []TxIn, txOuts []TxOut) Transaction {
	pubKey := encodePublicKey(privKey.PublicKey)
	tx := Transaction{"",  pubKey,"", txIns, txOuts}

	signTxIns(tx, privKey)

	tx.Id = calculateTxId(tx)
	tx.Signature = signData(privKey, []byte(tx.Id))
	return tx
}

The full code for my Go port of the naivecoin tutorial (first parts) is on my Github.

There are various other concepts I would still like to look into. Ways to effectively search the whole blockchain and all transactions in it (to validate transactions, build block explorers, etc.) would be interesting. I understand some type of embedded database may be used. And endlessly argued about. Ah, just when you thought the programming language flame-wars of the past are no more, you find it all again. Much stability in argument, such peace it brings. In my Rock’n chair I kek at all things past and present. Sorry, just trying to learn to talk like the internet and crypto people do. On a lighter note, other topics of interest to look into include robust peer-to-peer connections, ring signatures, proof of stake, etc.

That about sums it up for my experiment for now. Do let me know what are all the things I got wrong.

Trying to learn ECDSA and GoLang

Recently I have been looking at the Naivecoin tutorial and trying to implement it in Go, to get an idea of how blockchains really work and to learn some more Go as well. The tutorial code is in Javascript, and translating it to Go has been mostly straightforward. However, porting the third part, with transactions, was giving me some issues; I had trouble figuring out how to port the signature part. This is the tutorial code in Javascript:

    const key = ec.keyFromPrivate(privateKey, 'hex');
    const signature: string = toHexString(key.sign(dataToSign).toDER());

This is nice and simple, just suitable for a high-level framework and tutorial. However, to implement it myself in Go, …

The JS code above takes the private key as a hex formatted string, and parses that into a Javascript PrivateKey object. This key object is then used to sign the “dataToSign”, the signature is formatted as something called “DER”, and the result is formatted as a hex string. What does all that mean?

The tutorial refers to Elliptic Curve Digital Signature Algorithm (ECDSA). The data to sign in this case is the SHA256 hash of the transaction ID. So how to do this in Go? Go has an ecdsa package with private keys, public keys, and functions to sign and verify data. Sounds good. But the documentation is quite sparse, so how do I know how to properly use it?

To try it, I decided to first write a program in Java using ECDSA signatures, and use it to compare to the results of the Go version I would write. This way I would have another point of reference to compare my results to, and to understand if I did something wrong. I seemed to find more information about the Java implementation, and since I am more familiar with Java in general..

So first to generate the keys to use for signatures in Java:

	public static PrivateKey generateKey() throws Exception {
		KeyPairGenerator keyGen = KeyPairGenerator.getInstance("EC");
		SecureRandom random = SecureRandom.getInstance("SHA1PRNG");

		keyGen.initialize(256, random); //256 bit key size

		KeyPair pair = keyGen.generateKeyPair();
		ECPrivateKey priv = (ECPrivateKey) pair.getPrivate();
		PublicKey pub = pair.getPublic();

		//actually also need the public key, but lets get to that later...
		return priv;
	}

The above code starts by getting an “EC” key-pair generator, EC referring to Elliptic Curve, and then a secure random number generator instance, in this case one based on the SHA1 hash algorithm. Apparently this is fine, even if SHA1 is not recommended for everything these days. Not quite sure about the key size of 256 given, but maybe I have to look at that later.. First to get this working.

The “priv.getEncoded()” part (used a bit further below) turns the private key into a standard encoding format as a byte array. Base64 encode it for a character representation, to copy into the Go version..

Next, to sign the data (or message, or whatever we want to sign..):

	public static byte[] signMsg(String msg, PrivateKey priv) throws Exception {
		Signature ecdsa = Signature.getInstance("SHA1withECDSA");

		ecdsa.initSign(priv);

		byte[] strByte = msg.getBytes("UTF-8");
		ecdsa.update(strByte);

		byte[] realSig = ecdsa.sign();

		System.out.println("Signature: " + new BigInteger(1, realSig).toString(16));

		return realSig;
	}

The above starts with getting a Java instance of the ECDSA signature algorithm, with type “SHA1withECDSA”. I spent a good moment wondering what all this means, to be able to copy the functionality into the Go version. Long story short: first the data is hashed with SHA1, and then this hash is signed with ECDSA. Finally, the code above prints the signature bytes as a hexadecimal string (byte array -> BigInteger -> base-16 string). I can then simply copy-paste this hex-string into Go to see if I can get signature verification to work in Go vs Java. Brilliant.

First I tried to see that I can get the signature verification to work in Java:

	private static boolean verifySignature(PublicKey pubKey,String msg, byte[] signature) throws Exception {
		byte[] message = msg.getBytes("UTF-8");
		Signature ecdsa = Signature.getInstance("SHA1withECDSA");
		ecdsa.initVerify(pubKey);
		ecdsa.update(message);
		return ecdsa.verify(signature);
	}

The code above takes the public key associated with the private key that was used to sign the data (called “msg” here). It creates the same type of ECDSA signature instance as the signature creation previously. This is used to verify the signature is valid for the given message (data). So signed with the private key, verified with the public key. And yes, it returns true for the signed message string, and false otherwise, so it works. So now knowing I got this to work, I can try the same in Go, using the signature, public key, and private key that was used in Java. But again, the question. How do I move these over?

Java provides functions such as key.getEncoded(). This gives a byte array, which we can Base64 encode to get a string (I believe Bitcoin etc. use Base58, but it is the same idea). So something like this:

		//https://stackoverflow.com/questions/5355466/converting-secret-key-into-a-string-and-vice-versa
		byte[] pubEncoded = pub.getEncoded();
		String encodedPublicKey = Base64.getEncoder().encodeToString(pubEncoded);
		String encodedPrivateKey = Base64.getEncoder().encodeToString(priv.getEncoded());
		System.out.println(encodedPrivateKey);
		System.out.println(encodedPublicKey);

Maybe I could then take the output I just printed, and decode that into the key in Go? But what is the encoding? Well, the JDK docs say getEncoded() “Returns the key in its primary encoding format”. And what might that be? Well some internet searching and debugger runs later I come up with this (which works to re-create the keys in Java):

	public static PrivateKey base64ToPrivateKey(String encodedKey) throws Exception {
		byte[] decodedKey = Base64.getDecoder().decode(encodedKey);
		PKCS8EncodedKeySpec spec = new PKCS8EncodedKeySpec(decodedKey);
		KeyFactory factory = KeyFactory.getInstance("EC");
		PrivateKey privateKey = factory.generatePrivate(spec);
		return privateKey;
	}

	public static PublicKey base64ToPublicKey(String encodedKey) throws Exception {
		byte[] decodedKey = Base64.getDecoder().decode(encodedKey);
		X509EncodedKeySpec spec = new X509EncodedKeySpec(decodedKey);
		KeyFactory factory = KeyFactory.getInstance("EC");
		PublicKey publicKey = factory.generatePublic(spec);
		return publicKey;
	}

So the JDK encodes the private key in PKCS8 format, and the public key in some kind of X509 format. X509 seems to be related to certificates, and PKCS refers to “Public Key Cryptography Standards”, of which there are several. Both of these seem a bit complicated, as I was just looking to transfer the keys over. Since people can post those online for various crypto tools as short strings, it cannot be that difficult, can it?

I tried to look for ways to take the PKCS8 and X509 data into Go and transform it into private and public keys. That did not get me very far. Instead, I figured there must be only a small part of each key that is needed to reproduce it.

So I found that the private key has a single large number that is the important bit, and that the public key can be calculated from the private key. The public key itself consists of two parameters, the x and y coordinates of a point (on the elliptic curve, I assume). I browsed all over the internet trying to figure this out, but did not keep records of all the sites I visited, so my references are kind of lost. However, here is one description that states exactly the integer and point parts. Anyway, please let me know of any good references for a non-mathematician like me to understand it, if you have any.

To get the private key value into suitable format to pass around in Java:

	//https://stackoverflow.com/questions/40552688/generating-a-ecdsa-private-key-in-bouncy-castle-returns-a-public-key
	private static String getPrivateKeyAsHex(PrivateKey privateKey) {
		ECPrivateKey ecPrivateKey = (ECPrivateKey) privateKey;
		byte[] privateKeyBytes = ecPrivateKey.getS().toByteArray();
		String hex = bytesToHex(privateKeyBytes);
		return hex;
	}

The “hex” string in the above code is the big integer value that forms the basis of the private key. This can now be passed, backed up, or whatever we desire. Of course, it should be kept private so no posting it on the internet.

For the public key:

	private static String getPublicKeyAsHex(PublicKey publicKey) {
		ECPublicKey ecPublicKey = (ECPublicKey) publicKey;
		ECPoint ecPoint = ecPublicKey.getW();

		byte[] affineXBytes = ecPoint.getAffineX().toByteArray();
		byte[] affineYBytes = ecPoint.getAffineY().toByteArray();

		String hexX = bytesToHex(affineXBytes);
		String hexY = bytesToHex(affineYBytes);

		return hexX+":"+hexY;
	}

The above code takes the X and Y coordinates that make up the public key and combines them into a single string that can later be split back into the X and Y of the public key. A more sensible option would likely be a single byte array with the length of the first part in the first byte or two: something like [byte count for X][bytes of X][bytes of Y]. But string concatenation works for my simple example.
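As a rough sketch, that length-prefixed encoding might look like this on the Go side (my own illustration; assumes the X part fits in 255 bytes, which holds for these curve sizes):

//sketch: encode a curve point as [byte count for X][bytes of X][bytes of Y]
func encodePoint(x *big.Int, y *big.Int) []byte {
	xBytes := x.Bytes()
	yBytes := y.Bytes()
	encoded := []byte{byte(len(xBytes))} //first byte holds the length of the X part
	encoded = append(encoded, xBytes...)
	encoded = append(encoded, yBytes...)
	return encoded
}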

And then there is one more thing that needs to be encoded and passed between the implementations: the signature. Far above, I wrote the “signMsg()” method to build the signature, and printed the signature bytes out as a hex-string. But what format is the signature in, and how do you translate it to another platform and verify it? It turns out Java gives the signatures in ASN.1 format; there is a good description of the format here. It is not too complicated, but how would I import that into Go? I did not find any mention of this in the ECDSA package for Go. By searching with ASN.1 I did finally find an ASN.1 package for Go. But is there a way to do this without these (poorly documented) encodings?

Well, it turns out that ECDSA signatures can also be described by using just two large integers, which I refer to here as R and S. To get these in Java:

	public static byte[] signMsg(String msg, PrivateKey priv) throws Exception {
		Signature ecdsa = Signature.getInstance("SHA1withECDSA");

		ecdsa.initSign(priv);

		byte[] strByte = msg.getBytes("UTF-8");
		ecdsa.update(strByte);

		byte[] realSig = ecdsa.sign();

		System.out.println("R: "+extractR(realSig));
		System.out.println("S: "+extractS(realSig));

		return realSig;
	}

	//https://stackoverflow.com/questions/48783809/ecdsa-sign-with-bouncycastle-and-verify-with-crypto
	public static BigInteger extractR(byte[] signature) throws Exception {
		int startR = (signature[1] & 0x80) != 0 ? 3 : 2;
		int lengthR = signature[startR + 1];
		return new BigInteger(Arrays.copyOfRange(signature, startR + 2, startR + 2 + lengthR));
	}

	public static BigInteger extractS(byte[] signature) throws Exception {
		int startR = (signature[1] & 0x80) != 0 ? 3 : 2;
		int lengthR = signature[startR + 1];
		int startS = startR + 2 + lengthR;
		int lengthS = signature[startS + 1];
		return new BigInteger(Arrays.copyOfRange(signature, startS + 2, startS + 2 + lengthS));
	}

The above code takes the byte array of the signature and parses the R and S from it, matching the ASN.1 specification I linked above. With that, another alternative is again to just turn the R and S into hex-strings or Base58-encoded strings, combine them into a single byte array and hex or Base58 that, or whatever. Only those two values need to be passed to capture the signature.
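For reference, my understanding of the DER layout being parsed here, as a simplified sketch:

	30 [total length] 02 [length of R] [bytes of R] 02 [length of S] [bytes of S]

The 0x30 is the ASN.1 tag for a SEQUENCE, and 0x02 the tag for an INTEGER. When the total length is over 127, DER inserts an extra length byte, which is what the (signature[1] & 0x80) check in the code above handles by shifting the start offset by one.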

Now, finally to parse all this data in Go and to verify the signature. First to get the private key from the hex-string:

	func hexToPrivateKey(hexStr string) *ecdsa.PrivateKey {
		bytes, err := hex.DecodeString(hexStr)
		if err != nil {
			panic(err)
		}

		k := new(big.Int)
		k.SetBytes(bytes)

		priv := new(ecdsa.PrivateKey)
		curve := elliptic.P256()
		priv.PublicKey.Curve = curve
		priv.D = k
		priv.PublicKey.X, priv.PublicKey.Y = curve.ScalarBaseMult(k.Bytes())
		//this print can be used to verify if we got the same parameters as in Java version
		fmt.Printf("X: %d, Y: %d", priv.PublicKey.X, priv.PublicKey.Y)
		println()

		return priv
	}

The above code takes the hex-string, parses it into a byte array, creates a Go big integer from that, and sets the result as the value of the private key. The other part needed is the elliptic curve definition. In practice, one of a predefined set of curves is usually used, and the same curve is always used for a specific purpose. So it can be defined as a constant, whichever one is selected for the blockchain. In this case it is always the P256 curve, both in the Java and the Go versions (Bitcoin, for example, uses the Secp256k1 curve). So I just set the curve and the big integer to create the private key. The public key (the X and Y parameters) is calculated here from the private key, by using the ScalarBaseMult multiplier function on the private key’s big integer.

To build the public key straight from the X and Y values passed in as hex-strings:

	func hexToPublicKey(xHex string, yHex string) *ecdsa.PublicKey {
		xBytes, _ := hex.DecodeString(xHex)
		x := new(big.Int)
		x.SetBytes(xBytes)

		yBytes, _ := hex.DecodeString(yHex)
		y := new(big.Int)
		y.SetBytes(yBytes)

		pub := new(ecdsa.PublicKey)
		pub.X = x
		pub.Y = y

		pub.Curve = elliptic.P256()

		return pub
	}

Again, Base58 or similar would likely be a more efficient representation. The above code allows passing around just the public key and not the private key, which is how it should be done: the X and Y parameters are passed, and the curve is defined as a constant choice.

To create and verify the signature from the passed values:

	type ecdsaSignature struct {
		R, S *big.Int
	}

	func verifyMySig(pub *ecdsa.PublicKey, msg string, sig []byte) bool {
		//https://github.com/gtank/cryptopasta/blob/master/sign.go
		digest := sha1.Sum([]byte(msg))

		var esig ecdsaSignature
		asn1.Unmarshal(sig, &esig)
		//we can use these prints to compare to what we had in Java...
		fmt.Printf("R: %d , S: %d", esig.R, esig.S)
		println()
		return ecdsa.Verify(pub, digest[:], esig.R, esig.S)
	}

The above version reads the actual ASN.1 encoded signature produced by the default Java signature encoding. To get functionality matching the Java “SHA1withECDSA” algorithm, I first have to hash the input data with SHA1, as done here. Since the Java version is a bit of a black box with just that string definition, I spent a good moment wondering about that. I would guess the same approach applies to other choices such as “SHA256withECDSA”, by just replacing the hash function. Alternatively, I can also pass in the R and S values of the signature directly:

	func verifyMySig(pub *ecdsa.PublicKey, msg string, sig []byte) bool {
		//https://github.com/gtank/cryptopasta/blob/master/sign.go
		digest := sha1.Sum([]byte(msg))

		var esig ecdsaSignature
		//R and S must be allocated before calling SetString, or it panics on a nil pointer
		esig.R, _ = new(big.Int).SetString("89498588918986623250776516710529930937349633484023489594523498325650057801271", 10)
		esig.S, _ = new(big.Int).SetString("67852785826834317523806560409094108489491289922250506276160316152060290646810", 10)
		fmt.Printf("R: %d , S: %d", esig.R, esig.S)
		println()
		return ecdsa.Verify(pub, digest[:], esig.R, esig.S)
	}

So in the above, the R and S are set from numbers passed in. Normally these would be encoded more efficiently and given as parameters, but this works as a demonstration. The two long strings are the R and S integers I printed out in the Java version.

Strangely, printing the R and S from the ASN.1 parse and from the directly set numbers gives different values for R and S. Which is a bit odd, but both verify the signature fine. A likely explanation is that ECDSA signing is randomized (a fresh random value is used for every signature), so signing the same message twice gives different but equally valid R and S pairs, and the hard-coded numbers and the hex signature probably came from different signing runs. I also read somewhere that some transformations can be done on the signature numbers while keeping them valid, so that could be related as well. In any case, it works. Much trust such crypto skills I have.

func TestSigning(t *testing.T) {
	xHexStr := "4bc55d002653ffdbb53666a2424d0a223117c626b19acef89eefe9b3a6cfd0eb"
	yHexStr := "d8308953748596536b37e4b10ab0d247f6ee50336a1c5f9dc13e3c1bb0435727"
	ePubKey := hexToPublicKey(xHexStr, yHexStr)

	sig := "3045022071f06054f450f808aa53294d34f76afd288a23749628cc58add828e8b8f2b742022100f82dcb51cc63b29f4f8b0b838c6546be228ba11a7c23dc102c6d9dcba11a8ff2"
	sigBytes, _ := hex.DecodeString(sig)
	ok := verifyMySig(ePubKey, "This is string to sign", sigBytes)
	println(ok)
}

And finally, it works! Great 🙂

Playing with Pairwise Testing and PICT

A while back, I was doing some lectures on advanced software testing techniques. One topic was combinatorial testing. Looking into the materials, there are good, free tools out there to generate tests covering various combinations. Still, I don’t see many people using them, and the guides out there don’t seem too great.

Combinatorial testing here refers to having 2-way, 3-way, up to N-way (sometimes called t-way…) combinations of data values in different test cases. 2-way is also called pairwise testing, and simply refers to all pairs of data values appearing across the test cases. For example, if one test uses values “A” and “B”, and another uses “A” and “C”, you have covered the pairs A+B and A+C but not B+C. With large numbers of potential values, the set of potential combinations can grow pretty huge, so finding a minimal set covering all combinations can be very useful.

The benefits

There is a nice graph over at NIST, including a PDF with a broader description. Basically, these show that covering 2-way and 3-way combinations already gives very high gains in finding defects over considering coverage of single variables alone. Of course, things get a bit more complicated when you need to find all relevant variables in the program control flow, define what you can combine, all the constraints, etc. Maybe later. Now I just wanted to try the combinatorial test generation.

Do Not. Try. Bad Yoda Joke. Do Try.

So I gave combinatorial test generation a go. Using a nice and freely available PICT tool from Microsoft Research. It even compiles on different platforms, not just Windows. Or so they say on their Github.

Unexpectedly, compiling and getting PICT to run on my macOS setup was quite simple: just “make” and “make test”, as suggested on the main Github page. Probably I had most dependencies already from before, but anyway, it was surprisingly easy.

I made “mymodels” and “myoutputs” directories under the directory I cloned the git repo and compiled the code to, just to keep some order to my stuff. This is why the following example commands work..

I started with the first example on the PICT documentation page. The model looks like this:


Type:          Primary, Logical, Single, Span, Stripe, Mirror, RAID-5

Size:          10, 100, 500, 1000, 5000, 10000, 40000

Format method: quick, slow

File system:   FAT, FAT32, NTFS

Cluster size:  512, 1024, 2048, 4096, 8192, 16384, 32768, 65536

Compression:   on, off

Running the tool and getting some output is actually simpler than I expected:


./pict mymodels/example1.pict >myoutputs/example1.txt

PICT prints the list of generated test value combinations to the standard output, which generally just means printing a bunch of lines on the console/screen. To save the generated values, I just pipe the output to myoutputs/example1.txt, as shown above. In this case, the output looks like this:


Type	Size	Format method	File system	Cluster size	Compression

Stripe	100	quick	FAT32	1024	on

Logical	10000	slow	NTFS	512	off

Primary	500	quick	FAT	65536	off

Span	10000	slow	FAT	16384	on

Logical	40000	quick	FAT32	16384	off

Span	1000	quick	NTFS	512	on

Span	10	slow	FAT32	32768	off

Stripe	5000	slow	NTFS	32768	on

RAID-5	500	slow	FAT	32768	on

Mirror	1000	quick	FAT	32768	off

Single	10	quick	NTFS	4096	on

RAID-5	100	slow	FAT32	4096	off

Mirror	100	slow	NTFS	65536	on

RAID-5	40000	quick	NTFS	2048	on

Stripe	5000	quick	FAT	4096	off

Primary	40000	slow	FAT	8192	on

Mirror	10	quick	FAT32	8192	off

Span	500	slow	FAT	1024	off

Single	1000	slow	FAT32	2048	off

Stripe	500	quick	NTFS	16384	on

Logical	10	quick	FAT	2048	on

Stripe	10000	quick	FAT32	512	off

Mirror	500	quick	FAT32	2048	on

Primary	10	slow	FAT32	16384	on

Single	10	quick	FAT	512	off

Single	10000	quick	FAT32	65536	off

Primary	40000	quick	NTFS	32768	on

Single	100	quick	FAT	8192	on

Span	5000	slow	FAT32	2048	on

Single	5000	quick	NTFS	16384	off

Logical	500	quick	NTFS	8192	off

RAID-5	5000	quick	NTFS	1024	on

Primary	1000	slow	FAT	1024	on

RAID-5	10000	slow	NTFS	8192	on

Logical	100	quick	NTFS	32768	off

Primary	10000	slow	FAT	32768	on

Stripe	40000	quick	FAT32	65536	on

Span	40000	quick	FAT	4096	on

Stripe	1000	quick	FAT	8192	off

Logical	1000	slow	FAT	4096	off

Primary	100	quick	FAT	2048	off

Single	40000	quick	FAT	1024	off

RAID-5	1000	quick	FAT	16384	on

Single	500	quick	FAT32	512	off

Stripe	10	quick	NTFS	2048	off

Primary	100	quick	NTFS	512	off

Logical	10000	slow	NTFS	1024	off

Mirror	5000	quick	FAT	512	on

Logical	5000	slow	NTFS	65536	off

Mirror	10000	slow	FAT	2048	off

RAID-5	10	slow	FAT32	65536	off

Span	100	quick	FAT	65536	on

Single	5000	quick	FAT	32768	on

Span	1000	quick	NTFS	65536	off

Primary	500	slow	FAT32	4096	off

Mirror	40000	slow	FAT32	4096	off

Mirror	10	slow	FAT32	1024	off

Logical	10000	quick	FAT	4096	off

Span	5000	slow	FAT	8192	off

RAID-5	40000	quick	FAT32	512	on

Primary	5000	quick	NTFS	1024	off

Mirror	100	slow	FAT32	16384	off

The first line is the header, and values/columns are separated by tabulator characters (tabs).
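This makes it easy to feed the generated combinations into an automated test runner. A minimal Python sketch for parsing the output (file name from my runs above):

import csv

#read the PICT output: tab-separated values with a header row
with open("myoutputs/example1.txt") as f:
    rows = list(csv.DictReader(f, delimiter="\t"))

#each row is now a dict of parameter name -> value for one test case
for row in rows:
    print(row["Type"], row["File system"], row["Cluster size"])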

The output above is 62 generated combinations/test cases as evidenced by:


wc -l myoutputs/example1.txt

      63 myoutputs/example1.txt

(wc -l counts lines, and the first line is the header, so I subtract 1)

To produce all 3-way combinations with PICT, the syntax is:


./pict mymodels/example1.pict >myoutputs/example1.txt /o:3

which generates 392 combinations/test cases:


wc -l myoutputs/example1.txt

      393 myoutputs/example1.txt

I find the PICT command-line syntax a bit odd, as parameters have to be the last elements on the line, and they are identified by these strange symbols like “/o:”. But it works, so great.

Constraints

Of course, not all combinations are always valid. So PICT has extensive support to define constraints on the generator model, to limit what kind of combinations PICT generates. The PICT documentation page has lots of good examples. This part actually seems nicely documented. But let’s try a few just to see what happens. The basic example from the page:


Type:           Primary, Logical, Single, Span, Stripe, Mirror, RAID-5

Size:           10, 100, 500, 1000, 5000, 10000, 40000

Format method:  quick, slow

File system:    FAT, FAT32, NTFS

Cluster size:   512, 1024, 2048, 4096, 8192, 16384, 32768, 65536

Compression:    on, off

IF [File system] = "FAT"   THEN [Size] <= 4096;

IF [File system] = "FAT32" THEN [Size] myoutputs/example2.txt

wc -l myoutputs/example2.txt

      63 myoutputs/example2.txt

So the same number of tests. The contents:


Type	Size	Format method	File system	Cluster size	Compression

Stripe	500	slow	NTFS	1024	on

Primary	500	quick	FAT32	512	off

Single	10	slow	FAT	1024	off

Single	5000	quick	FAT32	32768	on

Span	40000	quick	NTFS	16384	off

Mirror	40000	slow	NTFS	512	on

RAID-5	100	quick	FAT	8192	on

Logical	500	slow	FAT	2048	off

Span	10000	slow	FAT32	1024	on

Logical	1000	slow	FAT32	16384	on

Span	1000	quick	FAT	512	off

Primary	10	quick	NTFS	1024	on

Mirror	1000	quick	NTFS	4096	off

RAID-5	40000	slow	NTFS	1024	off

Single	40000	slow	NTFS	8192	off

Stripe	10	slow	FAT32	4096	on

Stripe	40000	quick	NTFS	2048	on

Primary	100	slow	NTFS	32768	off

Stripe	500	quick	FAT	16384	off

RAID-5	1000	quick	FAT32	2048	off

Mirror	10	quick	FAT	65536	off

Logical	40000	quick	NTFS	4096	on

RAID-5	5000	slow	NTFS	512	off

Stripe	5000	slow	FAT32	65536	on

Span	10	quick	FAT32	2048	off

Logical	10000	quick	NTFS	65536	off

Primary	1000	slow	FAT	65536	off

Mirror	500	quick	FAT	32768	on

Single	100	quick	FAT32	512	on

Mirror	5000	slow	FAT32	2048	on

Mirror	100	quick	NTFS	2048	on

Logical	5000	quick	FAT32	8192	off

Logical	100	slow	FAT32	1024	on

Primary	100	quick	FAT32	16384	off

Primary	10000	quick	FAT32	2048	on

RAID-5	10	slow	FAT	32768	off

Mirror	10	quick	FAT	16384	on

Single	500	slow	FAT	4096	on

Span	500	slow	FAT32	8192	on

Stripe	10000	quick	FAT32	32768	off

Logical	1000	slow	NTFS	32768	on

Single	10000	slow	NTFS	16384	off

Span	100	slow	FAT32	4096	on

Stripe	1000	slow	NTFS	8192	on

Span	5000	quick	NTFS	32768	on

Primary	5000	slow	FAT32	4096	off

RAID-5	100	slow	FAT	65536	off

RAID-5	10000	slow	FAT32	4096	on

Single	1000	quick	FAT	1024	on

Mirror	10	quick	FAT	1024	on

Logical	5000	slow	FAT32	1024	off

Single	500	slow	FAT32	65536	off

Logical	10	quick	NTFS	512	on

Single	1000	slow	FAT	2048	off

Mirror	10000	quick	NTFS	8192	on

Primary	10	quick	FAT32	8192	on

Primary	40000	slow	NTFS	32768	off

Stripe	100	slow	FAT	512	off

Mirror	10000	slow	FAT32	512	on

RAID-5	5000	quick	NTFS	16384	off

Span	40000	quick	NTFS	65536	on

RAID-5	500	quick	FAT	4096	on

Looking at the “Size” column vs the “File system” column, the “FAT” file system now always has a size within the 4096 limit. So it works as expected. I have to admit, I found the value 4096 very confusing here, since there is no 4096 option in the input model for “Size”, but there is for “Cluster size”. I was initially looking at the wrong column, wondering why the constraint was not working. So it works, just a bit of a confusing example.

Similarly, 3-way combinations produce the same number of tests (as it did without any constraints) even with these constraints:


./pict mymodels/example2.pict >myoutputs/example2.txt /o:3

wc -l myoutputs/example2.txt

     393 myoutputs/example2.txt

To experiment a bit more, I set a limit on FAT size to be 100 or less:


Type:           Primary, Logical, Single, Span, Stripe, Mirror, RAID-5

Size:           10, 100, 500, 1000, 5000, 10000, 40000

Format method:  quick, slow

File system:    FAT, FAT32, NTFS

Cluster size:   512, 1024, 2048, 4096, 8192, 16384, 32768, 65536

Compression:    on, off

IF [File system] = "FAT"   THEN [Size] <= 100;

IF [File system] = "FAT32" THEN [Size] myoutputs/example3.txt

wc -l myoutputs/example3.txt

      62 myoutputs/example3.txt

./pict mymodels/example3.pict >myoutputs/example3.txt /o:3

wc -l myoutputs/example3.txt

     397 myoutputs/example3.txt

What happened here?

Running the 2-way generator produces 61 tests. So the number of combinations generated was finally reduced by one with the additional constraint.

Running the 3-way generator produces 396 tests. So the number of tests/combinations generated increased by 4, compared to the 3-way generator without this constraint. Which is odd, as I would expect the number of tests to go down when there are fewer options. In fact, you could get a smaller number of tests just by taking the 392 tests from the previous, less constrained, generator run, and for every line with “FAT” as “File system” and a “Size” bigger than 100, replacing the size with either 100 or 10. That would give a max of 392, as before.

My guess is this is because building a set of inputs to cover all requested combinations is a very hard problem. I believe in computer science this would be called an NP-hard problem (or so I gather from the academic literature on combinatorial testing, even if they call the test set a “covering array” and other academic tricks). So no known algorithm produces the optimal result in reasonable time. The generator has to accommodate all the possible constraints, and ends up taking some shortcuts here that result in a slightly bigger set. It is still likely a very nicely optimized set. Glad it’s not me having to write those algorithms :). I just use them and complain :).

PICT has a bunch of other ways to define conditional constraints, using IF, THEN, ELSE, NOT, OR, and AND statements. The docs cover that nicely, so let’s not go too deep there.
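Just for a taste, a compound constraint could look something like this (my own made-up example following that syntax, not one from the docs):


IF [File system] = "FAT" AND [Size] > 1000 THEN [Compression] = "off";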

The Naming Trick

Something I found interesting is a way to build models by naming different items separately, and constraining them separately:


#

# Machine 1

#

OS_1:   Win7, Win8, Win10

SKU_1:  Home, Pro

LANG_1: English, Spanish, Chinese

#

# Machine 2

#

OS_2:   Win7, Win8, Win10

SKU_2:  Home, Pro

LANG_2: English, Spanish, Chinese, Hindi

IF [LANG_1] = [LANG_2]

THEN [OS_1] <> [OS_2] AND [SKU_1] <> [SKU_2];

Here we have two items (“machines”) with the same three properties (“OS”, “SKU”, “LANG”). However, by numbering the properties, the generator sees them as different. From this, the generator can now build combinations of different two-machine configurations, using just the basic syntax and no need to tweak the generator itself. The only difference between the two is that “Machine 2” can have one additional language (“Hindi”).

The constraint at the end also nicely ensures that if the generated configurations have the same language, the OS and SKU should be different.

Scaling these “machine” combinations to a large number of machines would require a different type of approach, since it is doubtful anyone would want to write a model with 100 separately labeled machines. No idea what modelling approach would be best for that, but right now I don’t have a big need for it, so not going there. Maybe a different approach of having the generator produce a more abstract set of combinations, and mapping those to a large number of “machines” somehow.

Repetition and Value References

There is quite a bit of repetition in the above model with both machines repeating all the same parameter values. PICT has a way to address this by referencing values defined for other parameters:


#

# Machine 1

#

OS_1:   Win7, Win8, Win10

SKU_1:  Home, Pro

LANG_1: English, Spanish, Chinese

#

# Machine 2

#

OS_2: <OS_1>

SKU_2: <SKU_1>

LANG_2: <LANG_1>, Hindi

So in this case, “machine 2” repeats the values from “machine 1”, and changing them for “machine 1” also changes them for “machine 2”. Sometimes that is good, other times maybe not, because changing one thing changes many, and you might not remember that every time. On the other hand, you would not want to manually update all items with the same info every time either. A nice feature to have if you need it.

Data Types

With regard to variable types, PICT supports numbers and strings. This is given as an example model:


Size:  1, 2, 3, 4, 5

Value: a, b, c, d

IF [Size] > 3 THEN [Value] > "b";

I guess the two types exist so that you can define different types of constraints on them. For example, “Size” > 3 makes obvious sense. “Value” > “b” a bit less so, at first sight. So let’s try that:


./pict mymodels/example4.pict >myoutputs/example4.txt

wc -l myoutputs/example4.txt

      17 myoutputs/example4.txt

The output looks like this:


Size	Value

3	a

2	c

1	c

2	b

2	a

1	d

1	a

3	b

4	d

2	d

3	d

1	b

5	c

3	c

4	c

5	d

And here, if “Size” equals 4 or 5 (so is > 3), “Value” is always “c” or “d”. The PICT docs state “String comparison is lexicographical and case-insensitive by default”. So > “b” just refers to letters coming after “b”, which means “c” and “d” among the choices in this model. It seems a bit odd to define such comparisons against text in a model, but I guess it can help make a model more readable if you can represent values as numbers or strings, and define constraints on them in a similar way.

To verify, I try a slightly modified model, where the constraint now kicks in already for sizes above 1:


Size:  1, 2, 3, 4, 5

Value: a, b, c, d

IF [Size] > 1 THEN [Value] > "b";


./pict mymodels/example4.pict >myoutputs/example4.txt

wc -l myoutputs/example4.txt

      13 myoutputs/example4.txt

So the number of tests is reduced from 16 to 12, resulting in the following output:


Size	Value

5	c

2	c

1	d

4	d

1	b

4	c

3	d

3	c

2	d

1	c

1	a

5	d

Which confirms that lines (tests) with a Size above 1 now only have the letters “c” or “d” for Value. This naturally also limits the number of available combinations, hence the reduced test set.

Extra Features

There are some further features that are nicely explained in the PICT docs:

  • Submodels: Refers to defining levels of combinations per test. For example, 2-way combinations of OS with all others, and 3-way combination of File System Type with all others, at the same time.
  • Aliasing: You can give the same parameter several names, and all are treated the same. Not sure why you would want to do that, but anyway.
  • Weighting: Since the full set of combinations will have more of some values anyway, this can be used to set a preference for specific ones.

Negative Testing / Erroneous Values

A few more interesting ones are “negative testing” and “seeding”. So first, negative testing. Negative testing refers to having a set of exclusive values, which should never appear together. This is because each of them is expected to produce an error, so you want to make sure the error each one produces is visible and not “masked” (hidden) by another erroneous value.

The example model from PICT docs, with a small modification to name the invalid values differently:


#

# Trivial model for SumSquareRoots

#

A: ~-1, 0, 1, 2

B: ~-2, 0, 1, 2

Running it, we get:


./pict mymodels/example5.pict >myoutputs/example5.txt

wc -l myoutputs/example5.txt

      16 myoutputs/example5.txt


A	B

0	2

0	1

1	2

2	1

1	0

2	0

1	1

2	2

0	0

0	~-2

1	~-2

~-1	0

~-1	1

2	~-2

~-1	2

The negative values are prefixed with “~”, and the results show combinations of the two negative values with all possible values of the other variable. So if A is -1, it is combined with 0, 1, 2 for B. If B is -2, it is combined with 0, 1, 2 for A. But -1 and -2 are never paired, to avoid one “faulty” variable masking the other. I find having the “~” added everywhere a bit distracting, but I guess you could parse around it, so not a real issue.

Of course, there is nothing to stop us from setting the set of possible values to include -1 and -2, and getting combinations of several “negative” values. Let’s try:


A: -1, 0, 1, 2

B: -2, 0, 1, 2


./pict mymodels/example6.pict >myoutputs/example6.txt

wc -l myoutputs/example6.txt

      17 myoutputs/example6.txt


A	B

1	-2

2	0

1	0

-1	0

0	-2

2	1

-1	-2

0	0

1	2

-1	2

0	2

2	-2

1	1

-1	1

0	1

2	2

So there we go. This produced one test more than the previous run, and that extra one is the test where both negatives are present: the line with “-1” and “-2” together.

Overall, the “~” notation seems like just a way to avoid having a set of values appear together. Convenient, and a good way to optimize further when you have large models, big input spaces, slow tests, difficult problem reports, etc.

Seeding / Forcing Tests In

Seeding. When I hear seeding in test generation, I think about the seed value for a random number generator, because those are often used in test generators. Well, with PICT it actually means you can predefine a set of combinations that must be part of the final test set.

So let’s try with the first example model from above:


Type:          Primary, Logical, Single, Span, Stripe, Mirror, RAID-5

Size:          10, 100, 500, 1000, 5000, 10000, 40000

Format method: quick, slow

File system:   FAT, FAT32, NTFS

Cluster size:  512, 1024, 2048, 4096, 8192, 16384, 32768, 65536

Compression:   on, off

The seed file should be in the same format as the output produced by PICT. Let’s say I want to try all types with all file systems, using the smallest size. So I try with this:


Type	Size	Format method	File system	Cluster size	Compression

Primary	10		FAT32		on

Logical	10		FAT32		on

Single	10		FAT32		on

Span	10		FAT32		on

Stripe	10		FAT32		on

Mirror	10		FAT32		on

RAID-5	10		FAT32		on

Primary	10		FAT		on

Logical	10		FAT		on

Single	10		FAT		on

Span	10		FAT		on

Stripe	10		FAT		on

Mirror	10		FAT		on

RAID-5	10		FAT		on

Primary	10		NTFS		on

Logical	10		NTFS		on

Single	10		NTFS		on

Span	10		NTFS		on

Stripe	10		NTFS		on

Mirror	10		NTFS		on

RAID-5	10		NTFS		on

To run it:


./pict mymodels/example7.pict /e:mymodels/example7.seed >myoutputs/example7.txt

 wc -l myoutputs/example7.txt

      73 myoutputs/example7.txt

So, at the beginning of this post, the initial model generated 62 combinations. With this seed file, some forced repetition is there, and the size goes up to 72. Still not that much bigger, which I guess shows how nice it is to have a combinatorial test tool optimize this type of test set for you.

The actual output:


Type	Size	Format method	File system	Cluster size	Compression

Primary	10	quick	FAT32	2048	on

Logical	10	slow	FAT32	16384	on

Single	10	slow	FAT32	65536	on

Span	10	quick	FAT32	1024	on

Stripe	10	quick	FAT32	8192	on

Mirror	10	quick	FAT32	512	on

RAID-5	10	slow	FAT32	32768	on

Primary	10	slow	FAT	4096	on

Logical	10	quick	FAT	1024	on

Single	10	quick	FAT	32768	on

Span	10	slow	FAT	512	on

Stripe	10	slow	FAT	16384	on

Mirror	10	slow	FAT	8192	on

RAID-5	10	slow	FAT	2048	on

Primary	10	quick	NTFS	65536	on

Logical	10	quick	NTFS	4096	on

Single	10	slow	NTFS	16384	on

Span	10	quick	NTFS	32768	on

Stripe	10	slow	NTFS	1024	on

Mirror	10	slow	NTFS	2048	on

RAID-5	10	quick	NTFS	512	on

Span	40000	slow	FAT	65536	off

Single	5000	quick	NTFS	8192	off

Mirror	1000	quick	FAT32	4096	off

Stripe	100	slow	FAT	32768	off

Primary	500	slow	FAT	512	off

Primary	40000	quick	NTFS	8192	on

Logical	10000	quick	NTFS	32768	off

RAID-5	40000	slow	FAT32	1024	off

Span	100	quick	NTFS	8192	on

Mirror	10000	slow	FAT32	16384	off

Logical	5000	slow	FAT	512	on

Primary	1000	slow	FAT	1024	on

Mirror	5000	quick	FAT32	1024	on

Logical	1000	quick	NTFS	32768	on

Single	40000	slow	FAT32	512	on

Stripe	40000	quick	FAT	16384	on

Logical	100	quick	FAT32	2048	off

Single	100	quick	FAT32	1024	off

Primary	5000	quick	NTFS	32768	off

Single	40000	slow	NTFS	2048	on

Logical	500	quick	FAT32	8192	on

Single	500	slow	NTFS	4096	on

Span	500	quick	FAT32	16384	on

Primary	100	quick	FAT32	512	off

Stripe	1000	slow	FAT32	2048	on

RAID-5	10000	quick	FAT	8192	on

Stripe	10000	slow	NTFS	512	off

Stripe	5000	quick	FAT	65536	on

Mirror	40000	slow	NTFS	32768	on

Primary	10000	quick	NTFS	1024	on

RAID-5	100	quick	FAT	16384	off

Mirror	500	quick	NTFS	1024	on

Single	1000	slow	FAT32	512	on

Span	100	slow	FAT32	4096	off

Span	5000	slow	NTFS	2048	on

RAID-5	40000	slow	FAT	4096	off

Span	1000	slow	FAT32	16384	on

Mirror	100	quick	FAT	65536	on

Single	10000	slow	FAT	4096	off

RAID-5	1000	slow	NTFS	65536	off

Span	10000	slow	NTFS	65536	on

Span	1000	slow	FAT32	8192	off

RAID-5	500	quick	NTFS	32768	off

Stripe	500	slow	FAT	2048	off

RAID-5	5000	slow	NTFS	16384	on

Stripe	5000	slow	FAT32	4096	off

Logical	10	slow	FAT	65536	off

RAID-5	10000	quick	NTFS	2048	on

Primary	1000	slow	FAT	16384	off

Logical	40000	quick	FAT32	8192	on

Primary	500	quick	FAT	65536	on

This output starts with the given seeds, and PICT has done its best to fill in the blanks with values that still minimize the number of tests while meeting the combinatorial coverage requirements.

Personal Thoughts

Searching for PICT and pairwise or combinatorial testing brings up a bunch of results and reasonably good articles on the topic. Maybe even more practice-oriented ones than for model-based testing. Perhaps because it is simpler to apply, and thus easier to pick up and go in practice?

For example, this has a few good points. One is to use an iterative process to build the input models. So, as with everything else, don’t expect to get it all perfectly right on the first try. Another is to consider invariants as test oracles: things that should always hold, such as two nodes in a distributed system never being in a conflicting state when an operation involving both is done. Of course, this would also apply to any other type of testing. The article also seems to consider this from a hierarchical viewpoint, checking the strictest or most critical ones first.

Another good point in that article is to use readable names for the values. I guess sometimes people use the PICT output as such, to define test configurations and the like for manual testing. I would maybe consider using them more as input for automated test execution, to define the parameter values to cover. In such cases, it would be enough to give each value a short name such as “A”, “A1”, or “1”. But looking at the model and the output, it would then be difficult to tell which value maps to which symbol. Readable names are just as parseable for the computer, but much better for the human expert.

Combining with Sequential Models

So this is all nice and shiny, but the examples are actually quite simple test scenarios. There are no complex dependencies between the parameters, no complex state that defines what parameters and values are available, and so on. It mostly seems to revolve around what combinations of software or system configurations should be used in testing.

I have worked plenty with model-based testing myself (see OSMO), and have actually talked to some people who have combined combinatorial input generation with model-based testing. I can see how this could be interesting: identify a set of injection points for parameters and values in an MBT model, and use a combinatorial test data generator to build the data sets for those injection points. Doing more of this in practice would likely reveal good insights on what works, and what could make the match even better. Maybe someday.

In any case, I am sure combinatorial test datasets would also work great with other types of sequences. I think this could make a very interesting and practical research topic. Again, maybe someday..

Bye Now

In general, this area seems to have great tools for the basic test generation, but is missing in-depth experiences and guides on how to apply them to more complex software, together with sequential test cases and test generators.

A simpler, yet interesting, topic would be to integrate a PICT-type generator directly with the test environment: run the combinatorial generator during the test runs, and have it randomize the combinations in slightly different ways on different runs, while still maintaining the overall combinatorial coverage.
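As a rough sketch of that idea (assuming the pict binary and model paths from my runs above, and PICT’s /r:N option for randomizing generation with a given seed):

import subprocess

#run PICT with a randomization seed, so different runs give different
#(but still combinatorially covering) sets of combinations
def generate_tests(model_path, seed):
    result = subprocess.run(["./pict", model_path, "/r:" + str(seed)],
                            capture_output=True, text=True, check=True)
    lines = result.stdout.splitlines()
    header = lines[0].split("\t")
    return [dict(zip(header, line.split("\t"))) for line in lines[1:]]

tests = generate_tests("mymodels/example1.pict", seed=42)
print(len(tests), tests[0])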

Finnish Topic Modelling

Previously I wrote about a few experiments I ran with topic modelling. I briefly glossed over having some results for a set of Finnish text, as an example of a smaller dataset. This is a slightly deeper look into that..

I use two datasets: the Finnish Wikipedia dump, and the city of Oulu board minutes. The same ones I used before. Previously I covered topic modelling more generally, so I won’t go into too much detail here. To summarize, topic modelling algorithms (of which LDA, or Latent Dirichlet Allocation, is used here) find sets of words with different distributions over sets of documents. These word sets are then called the “topics” discussed in those documents.

This post looks at how to use topic models for a language other than English, and what one could maybe do with the results.

Lemmatize (turn words into base forms before use) or not? I choose to lemmatize for topic modelling. This seems to be the general consensus when looking up info on topic modelling, and in my experience it just gives better results, as the same word then appears only once. I covered POS tagging previously, and I believe it would be useful to apply here as well, but I don’t. Mostly because it is not needed to test these concepts, and I find the results good enough without adding POS tagging to the mix (which has its issues, as I discussed before). Simplicity is nice.

I used the Python Gensim package for building the topic models. As input, I used the Finnish Wikipedia text and the city of Oulu board minutes texts. I used my existing text extractor and lemmatizer for these (to get the raw text out of the HTML pages and PDF docs, and to baseform them, as discussed in my previous posts). I dumped the lemmatized raw text into files using slight modifications of my previous Java code, and then read the docs from those files as input to Gensim in a Python script.
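The Gensim part of the script boils down to roughly this (a simplified sketch; the docs variable stands in for the lemmatized documents read from my dump files):

from gensim import corpora, models

#each document is a list of lemmatized words; tiny stand-in data here
docs = [["oulu", "kaupunki", "palvelu"], ["koulu", "oppilas", "opetus"]]

#map words to ids, and turn each document into a bag-of-words
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

#train LDA with 50 topics and 1 pass over the corpus, as in the runs below
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=50, passes=1)
print(lda.print_topics(10))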

I started with the Finnish Wikipedia dump, using Gensim to build 50 topics, with 1 pass over the corpus. The first 10 topics that I got:

  • topic0=focus[19565] var[8893] liivi[7391] luku[6072] html[5451] murre[3868] verkkoversio[3657] alku[3313] joten[2734] http[2685]
  • topic1=viro[63337] substantiivi[20786] gen[19396] part[14778] taivutus[13692] tyyppi[6592] täysi[5804] taivutustyyppi[5356] liite[4270] rakenne[3227]
  • topic2=isku[27195] pieni[10315] tms[7445] aine[5807] väri[5716] raha[4629] suuri[4383] helppo[4324] saattaa[4044] heprea[3129]
  • topic3=suomi[89106] suku[84950] substantiivi[70654] pudottaa[59703] kasvi[46085] käännös[37875] luokka[35566] sana[33868] kieli[32850] taivutusmuoto[32067]
  • topic4=ohjaus[129425] white[9304] off[8670] black[6825] red[5066] sotilas[4893] fraasi[4835] yellow[3943] perinteinen[3744] flycatcher[3735]
  • topic5=lati[48738] eesti[25987] www[17839] http[17073] keele[15733] eki[12421] lähde[11306] dict[11104] sõnaraamat[10648] tallinn[8504]
  • topic6=suomi[534914] käännös[292690] substantiivi[273243] aihe[256126] muualla[254788] sana[194213] liittyvä[193298] etymologi[164158] viite[104417] kieli[102489]
  • topic7=italia[66367] substantiivi[52038] japani[27988] inarinsaame[9464] kohta[7433] yhteys[7071] vaatekappale[5553] rinnakkaismuoto[5469] taas[4986] voimakas[3912]
  • topic8=sana[548232] liittyvä[493888] substantiivi[298421] ruotsi[164717] synonyymi[118244] alas[75430] etymologi[64170] liikuttaa[38058] johdos[25603] yhdyssana[24943]
  • topic9=juuri[3794] des[3209] jumala[1799] tadžikki[1686] tuntea[1639] tekijä[1526] tulo[1523] mitta[1337] jatkuva[1329] levy[1197]
  • topic10=törmätä[22942] user[2374] sur[1664] self[1643] hallita[1447] voittaa[1243] piste[1178] data[1118] harjoittaa[939] jstak[886]

The format of the topic list here is “topicX=word1[count] word2[count] …”, where X is the number of the topic, word1 is the first word in the topic, word2 the second, and so on. The [count] is how many times the word was associated with the topic across the documents. Consider it the strength, weight, or whatever, of the word in the topic.

So just a few notes on the above topic list:

  • topic0 = mostly website related terms, interleaved with a few odd ones. Examples of the odd ones: “liivi” = vest, “luku” = number/chapter (POS tagging would help differentiate), “murre” = dialect.
  • topic1 = mostly Finnish language related terms. “viro” = Estonia, slightly odd to have here. It is the closest related language to Finnish, but still..
  • topic3 = another Finnish language related topic. The odd one here is “kasvi” = plant. Generally this one seems more related to words and their forms, whereas topic1 is maybe more about structure and relations.
  • topic5 = Estonia related

Overall, I think this would improve with more passes over the corpus to train the model, giving the algorithm more time and data to refine it. I only ran one pass here, since training with more topics and more passes started taking days, and I did not have the resources to go there.

My guess is also that with more data and broader concepts (Wikipedia covers pretty much every topic there is..), you would also need more topics than the 50 I used here. However, I had to limit the size due to time and resource constraints. Gensim probably also has more advanced tuning options (e.g., parallel runs) that would improve the speed. So I tried a few more sizes and passes with the smaller Oulu city board dataset, as it was faster to run.

Some topics for the city of Oulu board minutes, run for 20 topics and 20 passes over the training data:

  • topic0=oulu[2096] kaupunki[1383] kaupunginhallitus[1261] 2013[854] päivämäärä[575] vuosi[446] päätösesitys[423] jäsen[405] hallitus[391] tieto[387]
  • topic1=kunta[52] palvelu[46] asiakaspalvelu[41] yhteinen[38] viranomainen[25] laki[24] valtio[22] myös[20] asiakaspalvelupiste[19] kaupallinen[17]
  • topic2=oulu[126] palvelu[113] kaupunki[113] koulu[89] tukea[87] edistää[71] vuosi[71] osa[64] nuori[63] toiminta[61]
  • topic3=tontti[490] kaupunki[460] oulu[339] asemakaava[249] rakennus[241] kaupunginhallitus[234] päivämäärä[212] yhdyskuntalautakunta[206] muutos[191] alue[179]
  • topic5=kaupunginhallitus[1210] päätös[1074] jäsen[861] oulu[811] kaupunki[681] pöytäkirja[653] klo[429] päivämäärä[409] oikaisuvaatimus[404] matti[316]
  • topic6=000[71] 2012[28] oulu[22] muu[20] tilikausi[16] vuosi[16] yhde[15] kunta[14] 2011[13] 00000[13]
  • topic8=alue[228] asemakaava[96] rakentaa[73] tulla[58] oleva[56] rakennus[55] merkittävä[53] kortteli[53] oulunsalo[50] nykyinen[48]
  • topic10=asiakirjat.ouka.fi[15107] ktwebbin[15105] 2016[7773] eet[7570] pk_asil_tweb.htm?[7551] ktwebscr[7550] dbisa.dll[7550] url=http[7540] doctype[7540] =3&docid[7540]
  • topic11=yhtiö[31] osake[18] osakas[11] energia[10] hallitus[10] 18.11.2013[8] liite[7] lomautus[6] sähkö[6] osakassopimus[5]
  • topic12=13.05.2013[13] perlacon[8] kuntatalousfoorumi[8] =1418[6] meeting_date=21.3.2013[6] =2070[6] meeting_date=28.5.2013[6] =11358[5] meeting_date=3.10.2016[5] -31.8.2015[4]
  • topic13=001[19] oulu[11] 002[5] kaupunki[4] sivu[3] ���[3] palvelu[3] the[3] asua[2] and[2]

Some notes on the topics above:

  • The word “oulu” repeats in most of the topics. This is quite natural as all the documents are from the board of the city of Oulu. Depending on the use case for the topics, it might be useful to add this word to the list of words to be removed in the pre-cleaning phase for the documents before running the topic modelling algorithm. Or it might be useful information, along with the weight of the word inside the topic. Depends.
  • topic0 = generally about the board structure. For example, “kaupunki”=city, “kaupunginhallitus”=city board, “päivämäärä”=date, “päätösesitys”=proposal for decision.
  • topic1 = Mostly city service related words. For example, “kunta” = county, “palvelu” = service, “asiakaspalvelu” = customer service, “myös” = also, so something to add to the cleaners again.
  • topic2 = School related. For example, “koulu” = school, “tukea” = support, … Sharing again common words such as “kaupunki” = city, which may also be considered for removal or not depending on the case.
  • topic3 = City area planning related. For example, “tontti” = plot of land, “asemakaava” = zoning plan, …
  • In general, quite good and focused topics here, so I think quite a good result. Some exceptions to consider:
  • topic10 = mostly garbage related to HTML formatting and website link structures. Still a real topic of course, so nicely identified.. but something to consider adding to the cleaning list for pre-processing.
  • topic12 = Seems related to some city finance related consultation (perlacon seems to be such as company) and associated event (the forum). With a bunch of meeting dates.
  • topic13 = unclear garbage
  • So in general, I guess reasonably good results. But in real applications, several iterations of fine-tuning the removed words, the topic modelling algorithm parameters, etc. based on the results would be very useful.

So that was the city minutes for a smaller set of topics and more passes. What does it look like for 100 topics, and how does the number of passes affect the larger topic count? More passes should give the algorithm more time to refine the topics, but a smaller dataset might not have that many good topics..

For 100 topics, 1 pass, the first 10 topics:

  • topic0=oulu[55] kaupunki[22] 000[20] sivu[14] palvelu[14] alue[13] vuosi[13] muu[11] uusi[11] tavoite[9]
  • topic1=kaupunki[18] oulu[17] jäsen[15] 000[10] kaupunginhallitus[7] kaupunginjohtaja[6] klo[6] muu[5] vuosi[5] takaus[4]
  • topic2=hallitus[158] oulu[151] 25.03.2013[135] kaupunginhallitus[112] jäsen[105] varsinainen[82] tilintarkastaja[79] kaupunki[75] valita[70] yhtiökokousedustaja[50]
  • topic3=kuntalisä[19] oulu[16] palkkatuki[15] kaupunki[14] tervahovi[13] henkilö[12] tukea[12] yritys[10] kaupunginhallitus[10] työtön[9]
  • topic4=koulu[37] oulu[7] sahantie[5] 000[5] äänestyspaikka[4] maikkulan[4] kaupunki[4] kirjasto[4] monitoimitalo[3] kello[3]
  • topic5=oulu[338] kaupunki[204] euro[154] kaupunginhallitus[143] 2013[105] vuosi[96] milj[82] palvelu[77] kunta[71] uusi[64]
  • topic6=000[8] oulu[7] kaupunki[4] vuosi[3] 2012[3] muu[3] kunta[2] muutos[2] 2013[2] sivu[1]
  • topic7=000[5] 26.03.2013[4] oulu[3] 2012[3] kunta[2] vuosi[2] kirjastojärjestelmä[2] muu[1] kaupunki[1] muutos[1]
  • topic8=oulu[471] kaupunki[268] kaupunginhallitus[227] 2013[137] päivämäärä[97] päätös[93] vuosi[71] tieto[67] 000[66] päätösesitys[64]
  • topic9=oulu[5] lomautus[3] 000[3] kaupunki[2] säästötoimenpidevapaa[1] vuosi[1] kunta[1] kaupunginhallitus[1] sivu[1] henkilöstö[1]
  • topic10=oulu[123] kaupunki[82] alue[63] sivu[43] rakennus[42] asemakaava[39] vuosi[38] tontti[38] 2013[35] osa[35]

Without going too much into translating every word, I would say these results are too spread out, so it seems that for this dataset a smaller set of topics would do better. This also seems visible in the word counts/strengths in the [square brackets]: the topics with small weights seem like pretty poor topics, while the ones with bigger weights look better (just my opinion, of course :)). Maybe something to consider when exploring the number of topics, etc.

And the same run, this time with 20 passes over the corpus (100 topics, first 10 shown):

  • topic0=oulu[138] kaupunki[128] palvelu[123] toiminta[92] kehittää[73] myös[72] tavoite[62] osa[55] vuosi[50] toteuttaa[44]
  • topic1=-seurantatieto[0] 2008-2010[0] =30065[0] =170189[0] =257121[0] =38760[0] =13408[0] oulu[0] 000[0] kaupunki[0]
  • topic2=harmaa[2] tilaajavastuulaki[1] tilaajavastuu.fi[1] torjunta[1] -palvelu[1] talous[0] harmaantalous[0] -30.4.2014[0] hankintayksikkö[0] kilpailu[0]
  • topic3=juhlavuosi[14] 15.45[11] perussopimus[9] reilu[7] kauppa[6] juhlatoimikunta[6] työpaja[6] 24.2.2014[6] 18.48[5] tapahtumatuki[4]
  • topic4=kokous[762] kaupunginhallitus[591] päätös[537] pöytäkirja[536] työjärjestys[362] hyväksyä[362] tarkastaja[360] esityslista[239] valin[188] päätösvaltaisuus[185]
  • topic5=koulu[130] sivistys-[35] suuralue[28] perusopetus[25] tilakeskus[24] kulttuurilautakunta[22] järjestää[22] korvensuora[18] päiväkota[17] päiväkoti[17]
  • topic6=piste[24] hanke[16] toimittaja[12] hankesuunnitelma[12] tila[12] toteuttaa[11] hiukkavaara[10] hyvinvointikeskus[10] tilakeskus[10] monitoimitalo[9]
  • topic7=tiedekeskus[3] museo-[2] prosenttipohjainen[2] taidehankinta[1] uudisrakennushanke[1] hankintamääräraha[1] prosenttitaide[1] hankintaprosessi[0] toteutusajankohta[0] ulosvuokrattava[0]
  • topic8=euro[323] milj[191] vuosi[150] oulu[107] talousarvio[100] tilinpäätös[94] kaupunginhallitus[83] kaupunki[79] 2012[73] 2013[68]
  • topic9=päätös[653] oikaisuvaatimus[335] oulu[295] kaupunki[218] päivä[215] voi[211] kaupunginhallitus[208] posti[187] pöytäkirja[161] viimeinen[154]

Even the smaller topics here seem much better now, with the increased number of passes. So perhaps the real difference comes simply from having enough passes over the data, giving the algorithm more time and data to refine the models. At least I would not try without multiple passes, based on comparing these 1 vs 20 pass results.

For example, topic2 here has small numbers, but all its items seem related to the grey market economy. Similarly, topic7 has small numbers, but its words are mostly related to arts and culture.

So to summarize: lemmatize your words, explore your parameters, and ensure you have a decent amount of data and a decent number of passes for the algorithm. And clean your data properly, iterating over the process many times to get these right (well, as “right” as you can).

To answer my “research questions” from the beginning: topic modelling for different languages and use cases for topic modelling.

First, lemmatize all your data (I prefer it over stemming, though it can be more resource intensive). Clean your data of the typical stopwords for your language, but also of those specific to your dataset and domain. Run the models and analysis several times, and keep refining the list of removed words based on your use case, dataset, and domain. You likely also need to consider domain specific lemmatization rules, as I already discussed with POS tagging.

Secondly, what use cases did I find for topic modelling online? It actually seems really hard to find concrete reports of topic model uses. Quora has usually been promising, but not so much this time. So I looked at published research papers instead, trying to see if any companies were involved as well.

Some potential use cases from research papers:

Bug localization, as in finding the locations of bugs in source code, is investigated here. Source code (comments, source code identifiers, etc.) is modelled as topics, which are mapped to a query created from a bug report.

Matching duplicate documents in here. Topic distributions over bug reports are used to suggest duplicate bug reports. Not exact duplicates, but reports describing the same bug. If the topic distributions are close, they are flagged as potentially discussing the same “topic” (bug).

Ericsson has used topic models to map incoming bug reports to specific components, to make resolving bugs easier and faster by automatically assigning them to the (correct) teams. Large historical datasets of bug reports and their assignments to components are used to learn the topic models. The topic distribution of an incoming bug report is then used to rank the probability of it describing a specific component, by comparison to the topic distributions of previous bug reports for that component. The topic distributions are also presented as explanatory data to the expert looking at the classification results. Later, different approaches have been reported at Ericsson as well. So just a reminder that topic models are not the answer to everything, even if a useful component and worth a try in places.

In cyber security, this uses topic models to describe user activity as distributions over the different topics. Learn topic models from user activity logs, and describe each user’s typical activity as a topic distribution. If a log entry (e.g., a session?) diverges too much from this topic distribution for the user, flag it as an anomaly to investigate. I would expect simpler approaches could work for this as well, but as an input for anomaly detection, an interesting thought.

Tweet analysis is popular in NLP. This is an example of high-level tweet topic classification: politics, sports, science, … Useful input for recommendations etc., I am sure. A more targeted, domain specific example uses topics in typhoon related tweet analysis and classification: worried, damage, food, rescue operations, flood, … useful input for situation awareness, I would expect. As far as I understood, topic models were generated and labeled, and then users (or tweets) were assigned to the (high-level) topics by their topic distributions. Tweets are very small documents, so that is something to consider, as discussed in those papers.

Topic models are also used in biomedicine for text analysis: to find patterns (topic distributions) in papers discussing specific genes, for example. This could work more broadly as one tool to explore research in an area, to find clusters of concepts in broad sets of research papers on a specific “topic” (here, research on a specific gene). Of course, a number of other techniques likely exist to investigate for that as well, but topic models could have potential.

More generally, labelling and categorizing large numbers of historical/archival documents to assist users in search. Build topic models, have experts review and label the topics, then label your documents based on their topic distributions.

A bit further outside the box: split songs into segments based on their acoustic properties, and use topic modelling to identify different categories/types of music in large song databases. Then explore the popularity of such categories/types over time, based on the topic distributions over time. So the segments are your words, and the songs are your documents.

Finding duplicates of images in large datasets. Use image features as words, and images as documents. Build topic models from all the images, and find similar types of images by their topic distributions. Features could be edges, or even abstract ones such as those learned by something like a convolutional neural net. Assists in image search, I guess..

Most of these uses seem to be various types of search assistance, with a few odd ones thinking outside the box. With a decent understanding and some exploration, I think topic models can be useful in many places. The academics would say “dude, XYZ would work just as well”. Sure, but if it does the job for me, and is simple and easy to apply..

Word2Vec with some Finnish NLP

To get a better view of the popular Word2Vec algorithm and its applications in different contexts, I ran experiments with Word2Vec on the Finnish language. Let’s see.

I used two datasets. The first one is the traditional Wikipedia dump; I got the Finnish language dump from October 20th, because I ran the first experiments around that time. The second dataset was the board minutes for the City of Oulu for the past few years.

After running my cleaning code on the Wikipedia dump, it reported 600783 sentences and 6778245 words for the cleaned dump. Cleaning here refers to removing all the extra formatting, HTML tagging, etc. Sentences were tokenized using Voikko. For the board minutes, the same metrics were 4582 documents, 358711 sentences, and 986523 words. Most interesting, yes?

For running Word2vec I used the Deeplearning4J implementation. You can find the example code I used on Github.
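A rough sketch of how that DL4J usage looks (the file name and parameter values here are mine; see the linked repo for the real code):

	private static Word2Vec buildModel() throws Exception {
		SentenceIterator iter = new BasicLineIterator("sentences.txt"); //one sentence per line
		TokenizerFactory tokenizer = new DefaultTokenizerFactory();

		Word2Vec vec = new Word2Vec.Builder()
				.minWordFrequency(5) //ignore the rarest words
				.layerSize(100)      //size of the word vectors
				.windowSize(5)       //context window around each word
				.iterate(iter)
				.tokenizerFactory(tokenizer)
				.build();
		vec.fit(); //build the vocabulary and train the vectors

		//the kind of queries used to produce the result lists below
		System.out.println(vec.wordsNearest("auto", 10));
		System.out.println(vec.similarity("auto", "bussi"));
		return vec;
	}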

Again I have the question of whether to use lemmatization or not. Do I run the algorithm on baseformed words, or on unprocessed words in all their different forms?

Some prefer to run it after lemmatization, while the articles on word2vec generally say nothing on the topic, but rather seem to run it on raw text. This description of a similar algorithm actually shows an example of mapping “frog” to “frogs”, further indicating the use of raw text. I guess if you have really lots of data, and a language that does not have a huge number of forms for different words, that makes more sense. Or if you find relations between the forms of words more interesting.

For me, Finnish has so many word forms (morphologies or whatever they should be called?), and I generally don’t expect to run with hundreds of billions of words of data, so I tried both ways (with and without lemmatization) to see. With my limited data and the properties of the Finnish language, I would just go with lemmatization really, but it is always interesting to try and see.

Some results for my experiments:

Wikipedia without lemmatization, looking for the closest words to “auto”, which is Finnish for “car”. The top 10 results, along with similarity scores:

  • auto vs kuorma = 0.6297630071640015
  • auto vs akselin = 0.5929439067840576
  • auto vs auton = 0.5811734199523926
  • auto vs bussi = 0.5807990431785583
  • auto vs rekka = 0.578578531742096
  • auto vs linja = 0.5748337507247925
  • auto vs työ = 0.562477171421051
  • auto vs autonkuljettaja = 0.5613142848014832
  • auto vs rekkajono = 0.5595266222953796
  • auto vs moottorin = 0.5471497774124146

Words from above translated:

  • kuorma = load
  • akselin = axle’s
  • auton = car’s
  • bussi = bus
  • rekka = truck
  • linja = line
  • työ = work
  • autonkuljettaja = car driver
  • rekkajono = truck queue
  • moottorin = engine’s

A similarity score of 1 would mean a perfect match, while a score near 0 means no real similarity. Word2vec builds a model representing the positions of words in a “vector space”, inferred from “word embeddings”. This sounds fancy, and as usual, it is difficult to find a simple explanation of what is done. I view it as taking typically 100-300 numbers to represent each word’s relations in the “word space”. These get adjusted by the algorithm as it goes through all the sentences and records each word’s relations to the other words in those sentences. Probably all wrong as an explanation, but until someone gives a better one..
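The score itself is, as far as I understand, the cosine similarity between the two word vectors: the dot product divided by the vector magnitudes. A small sketch of the math (my own illustration, not the DL4J code):

	public static double cosineSimilarity(double[] a, double[] b) {
		double dot = 0, normA = 0, normB = 0;
		for (int i = 0; i < a.length; i++) {
			dot += a[i] * b[i];   //dot product of the two vectors
			normA += a[i] * a[i];
			normB += b[i] * b[i];
		}
		//1 = same direction, 0 = unrelated (orthogonal), -1 = opposite
		return dot / (Math.sqrt(normA) * Math.sqrt(normB));
	}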

To preprocess the documents for word2vec, I split the documents into sentences, to give the words a more meaningful context (a sentence vs just any surrounding words). There are other similar techniques, such as GloVe, that may work better with a more global “context” than a sentence. But this time I was playing with Word2vec, which I think is also interesting for many things, and it has lots of implementations and popularity.

Looking at the results above, there is the word “auton”, translating to “car’s”. The Finnish language has a large number of forms that different words can take. So sometimes it may be good to lemmatize, to see what the meaning of a word better maps to, vs just matching forms of words. So I lemmatize with Voikko, the Finnish language lemmatizer, again. Top 10 of the re-run:

  • auto vs ajoneuvo = 0.7123048901557922
  • auto vs juna = 0.6993820667266846
  • auto vs rekka = 0.6949941515922546
  • auto vs ajaa = 0.6905277967453003
  • auto vs matkustaja = 0.6886627674102783
  • auto vs tarkoitettu = 0.66249680519104
  • auto vs rakennettu = 0.6570218801498413
  • auto vs kuljetus = 0.6499230861663818
  • auto vs rakennus = 0.6315782070159912
  • auto vs alus = 0.6273047924041748

Meanings of the words in English:

  • ajoneuvo = vehicle
  • juna = train
  • rekka = truck
  • ajaa = drive
  • matkustaja = passenger
  • tarkoitettu = meant
  • rakennettu = built
  • kuljetus = transport
  • rakennus = building
  • alus = ship

So generally these mappings make some sense. Not sure about those building words; some deeper exploration would probably help..

Some people also came up with the idea of POS tagging before running word2vec, calling it Sense2Vec and so on. The point is to better differentiate how the different meanings of a word map to other words. So I tried POS tagging with the tagger I implemented before. Results:

  • auto_N vs juna_N = 0.7195479869842529
  • auto_N vs ajoneuvo_N = 0.6762610077857971
  • auto_N vs alus_N = 0.6689988970756531
  • auto_N vs kone_N = 0.6615594029426575
  • auto_N vs kuorma_N = 0.6477057933807373
  • auto_N vs tie_N = 0.6470917463302612
  • auto_N vs seinä_N = 0.6453390717506409
  • auto_N vs kuljettaja_N = 0.6449363827705383
  • auto_N vs matka_N = 0.6337422728538513
  • auto_N vs pää_N = 0.6313328146934509

Meanings of the words in English:

  • juna = train
  • ajoneuvo = vehicle
  • alus = ship
  • kone = machine
  • kuorma = load
  • tie = road
  • seinä = wall
  • kuljettaja = driver
  • matka = trip
  • pää = head

Soo… the weirdest ones here are the “wall” and “head” parts. Perhaps again a deeper exploration would tell more. The rest seem to make some sense just by looking.
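The Sense2Vec-style preprocessing is conceptually just appending the POS tag to each token before training, so that different senses become distinct “words” in the vocabulary. A rough sketch (pos_tag stands in for whatever tagger is used; mine was the one I implemented earlier):

def tag_sentence(tokens, pos_tag):
    # Append the POS tag to each token, e.g. "auto" -> "auto_N", so
    # word2vec treats differently tagged senses as separate words.
    return [f"{token}_{pos_tag(token)}" for token in tokens]

# e.g. tag_sentence(["auto", "ajaa"], tagger) -> ["auto_N", "ajaa_V"]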

And then the same for the City of Oulu board minutes, now looking for a word specific to the domain. The word is “serviisi”, the city office responsible for food production for different facilities and schools. This time lemmatization was applied for all results. Results:

  • serviisi vs tietotekniikka = 0.7979459762573242
  • serviisi vs työterveys = 0.7201094031333923
  • serviisi vs pelastusliikelaitos = 0.6803742051124573
  • serviisi vs kehittämisvisio = 0.678106427192688
  • serviisi vs liikel = 0.6737961769104004
  • serviisi vs jätehuolto = 0.6682301163673401
  • serviisi vs serviisin = 0.6641604900360107
  • serviisi vs konttori = 0.6479293704032898
  • serviisi vs efekto = 0.6455909013748169
  • serviisi vs atksla = 0.6436249017715454

Because “serviisi” is a very domain-specific word/name, the general-purpose Finnish lemmatization does not work for it. This is why “serviisin” is there again. To fix this, I added this and some other basic forms of the word to the list of custom spellings recognized by my lemmatizer tool. That is, use Voikko first, and if the word is not found, try a lookup in a custom list. If it is still not found, write it to a list of all unrecognized words, sorted by highest frequency first (to allow augmenting the custom list more effectively).
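A sketch of that fallback logic (the custom mappings shown are hypothetical examples):

from collections import Counter

import libvoikko

v = libvoikko.Voikko("fi")
# Custom list of domain-specific word forms mapped to base forms
# (hypothetical example entries).
custom = {"serviisin": "serviisi", "serviisille": "serviisi"}
unrecognized = Counter()

def lemmatize(word):
    analyses = v.analyze(word)
    if analyses:
        return analyses[0]["BASEFORM"].lower()
    if word in custom:
        return custom[word]
    unrecognized[word] += 1  # track misses for augmenting the custom list
    return word

# After a run, write the unrecognized words sorted by frequency,
# highest first, to make augmenting the custom list more effective.
def dump_unrecognized(path="unrecognized.txt"):
    with open(path, "w", encoding="utf-8") as f:
        for word, count in unrecognized.most_common():
            f.write(f"{word}\t{count}\n")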

Results after change:

  • serviisi vs tietotekniikka = 0.8719592094421387
  • serviisi vs työterveys = 0.7782909870147705
  • serviisi vs johtokunta = 0.695137619972229
  • serviisi vs liikelaitos = 0.6921887397766113
  • serviisi vs 19.6.213 = 0.6853622794151306
  • serviisi vs tilakeskus = 0.673351526260376
  • serviisi vs jätehuolto = 0.6718368530273438
  • serviisi vs pelastusliikelaitos = 0.6589146852493286
  • serviisi vs oulu-koilismaan = 0.6495324969291687
  • serviisi vs bid=2300 = 0.6414187550544739

Or another run:

  • serviisi vs tietotekniikka = 0.864517867565155
  • serviisi vs työterveys = 0.7482070326805115
  • serviisi vs pelastusliikelaitos = 0.7050554156303406
  • serviisi vs liikelaitos = 0.6591876149177551
  • serviisi vs oulu-koillismaa = 0.6580390334129333
  • serviisi vs bid=2300 = 0.6545186638832092
  • serviisi vs bid=2379 = 0.6458192467689514
  • serviisi vs johtokunta = 0.6431671380996704
  • serviisi vs rakennusomaisuus = 0.6401894092559814
  • serviisi vs tilakeskus = 0.6375274062156677

So what are all these?

  • tietotekniikka = city office for ICT
  • työterveys = occupational health services
  • liikelaitos = company
  • johtokunta = board (of directors)
  • konttori = office
  • tilakeskus = facilities center (manages the city’s office space)
  • pelastusliikelaitos = emergency office
  • energia = energy
  • oulu-koilismaan = name of area surrounding the city
  • bid=2300 is an identifier for one of the Serviisi board meeting minutes main pages.
  • 19.6.213 seems to be a typoed date; it can at least be found in one of the documents listing decisions by different city boards.

So almost all of the words that “serviisi” is found to be closest to are other city offices/companies responsible for different aspects of the city, such as ICT, energy, office space, emergency response, or occupational health. Makes sense.

OK, so much for the experimental runs. I should summarize something about this.

The Wikipedia results seem slightly better in the sense that the words it suggests are valid words. For the city board minutes I should probably filter more based on the presence of special characters and numbers. Maybe this is a property of larger datasets vs smaller ones, where the “garbage” more easily drowns in the larger sea of data. Don’t know.

The word2vec algorithm also has a set of parameters to tune, which would probably be worth more investigation to get better-optimized results for these different types of datasets. I simply used the same settings for both the city minutes and Wikipedia. Yet due to the size differences, it would likely be interesting to play at least with the size of the vector space. For example, bigger datasets might benefit from having a bigger vector space, which should enable them to express richer relations between different words. For smaller sets, a smaller space might be better. Similarly, the number of training iterations, minimum word frequencies, etc. should be tried a bit more. For me, the goal here was to get a general idea of how this works and how to use it with Finnish datasets. For that, these experiments are enough.
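For reference, these are the kinds of knobs I mean, in Gensim terms (the values are just guesses to illustrate, not tuned recommendations):

from gensim.models import Word2Vec

# Toy placeholder data; in practice the preprocessed sentence lists.
sentences = [["auto", "ajaa", "tiellä"], ["bussi", "on", "iso", "auto"]] * 100

# vector_size: dimensionality of the vector space (maybe larger for
# bigger corpora), epochs: training iterations over the data (called
# "iter" in older Gensim), min_count: ignore words rarer than this.
model = Word2Vec(sentences, vector_size=300, epochs=10, min_count=5, window=5)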

If you read up on any articles about word2vec, you will likely also see the hype about the ability to do equations such as “king – man + woman” = “queen”. These come from training on large English corpora. It simply says that the relation of the word “queen” to the word “woman” in sentences is typically the same as the relation of the word “king” to “man”. But this is often the only example ever given, or one of very few. Looking at the city minutes example here: since “serviisi” seems to map closest to all the other offices/companies of the city, what do we get if we run the arithmetic on “serviisi – liikelaitos” (liikelaitos being the common concept of an office/company)? I got things like “city traffic”, “reduce”, “children home”, “citizen specific”, “greenhouse gas”. Not really useful. So this seems most useful as a potential tool for exploration, but I cannot really say which part gives useful results when. But of course, it is nicer to report on the interesting abstractions it finds than on the boring fails.
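In Gensim that arithmetic is a one-liner; a sketch (the model file name is hypothetical):

from gensim.models import Word2Vec

model = Word2Vec.load("oulu_minutes.model")  # hypothetical saved model

# "king - man + woman" style query, here as "serviisi - liikelaitos".
for word, score in model.wv.most_similar(
        positive=["serviisi"], negative=["liikelaitos"], topn=10):
    print(word, score)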

I think lemmatization makes sense in the cases I showed here. I have no interest in merely knowing that a singular form of a word is related to the plural form of the same word. But I guess in some use cases that could be valid. Of course, for proper lemmatization you might also wish to first do POS tagging, to be able to choose the correct baseform from all the options presented. In this case I just took the first baseform from the list Voikko gives for each word.

Tokenization could also be of more interest. The Finnish language has a lot of compound words, some of which are visible in the above examples: for example, “kuorma-auto” and “linja-auto” in the Wikipedia data, or the different “liikelaitos” combinations in the City of Oulu version. N-grams (combinations of words) would also be useful to investigate. For example, “energia” in the city example could easily be related to the city power company called “Oulun Energia”. Many similar examples can likely be found all over any language and domain vocabulary.
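Gensim’s Phrases model is one way to pick up such frequent word combinations before training. A minimal sketch (toy sentences, hypothetical threshold values):

from gensim.models.phrases import Phrases, Phraser

# Placeholder for the preprocessed token lists from the earlier sketches.
sentences = [["oulun", "energia", "tuottaa", "sähköä"], ["linja", "auto", "ajaa"]]

# Phrases learns frequently co-occurring word pairs and joins them into
# single tokens (e.g. "oulun energia" -> "oulun_energia") when their
# co-occurrence score exceeds the threshold; tune these per corpus.
bigram = Phraser(Phrases(sentences, min_count=5, threshold=10))
bigram_sentences = [bigram[sentence] for sentence in sentences]
# word2vec can then be trained on bigram_sentences instead.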

Further custom spelling fixes would also be useful. For example, “oulu-koilismaan” above should be spelled “oulu-koillismaan”, and it could further be baseformed together with its other forms as “oulu-koillismaa”. Collecting these from the unrecognized-word list, and filtering out low-frequency occurrences, should make this relatively easy.

So perhaps the most interesting question: what is this good for?

Not synonym search. Somehow over time I had gotten the idea that word2vec could give you some kind of synonyms and such. Clearly it is not for that, but rather for identifying words used in similar contexts, around similar concepts, and the like.

So generally I can see it being useful for exploring related concepts in documents. Or generally exploring datasets and building concept maps, search definitions, etc. More as an input to human expert work than fully automated, as the results vary quite a bit.

Some interesting applications I found while looking at this:

  • Word2vec in Google type search, as well as search in general.
  • Exploring associations between medical terms. Perhaps helpful to identify new links you did not think of before? Likely would work in other similar domains as well.
  • Mapping words in different languages together.
  • Spotify mapping similar songs together via treating songs as words and playlists as sentences.
  • Someone tried it for sentiment analysis. Not really sure how useful that was, as I only skimmed the article, but in general I can see how it could help find different types of words related to sentiments. As before, not necessarily as automated input, but rather as input for an expert building more detailed models.
  • Using the similarity score weights as a means to find different topics. Maybe you could combine this with topic modelling and then look at the diversity of topics?
  • Product recommendations, treating products as words and sequences of purchases as sentences. Not sure how much meaning the purchase order carries, but it is an interesting idea.
  • Bet recommendations, modelling bets made by users with bet targets as words and sequences of bets as sentences, then finding similar bets to recommend.

So that was mostly that. Similar tools exist for many platforms, whatever gives you the kicks. For example, Voikko has a Python module on GitHub, and Gensim is a nice tool for many NLP processing tasks, including word2vec in Python.

There are also lots of datasets, especially for the English language, to use as pretrained word2vec models. For example, Facebook’s FastText vectors, Stanford’s GloVe datasets, and the Google News corpus. Some simple internet searches should turn up many of these, which I think is useful for general-purpose results. For more specific domains, training your own is good, as I did here for the city minutes..

Many tools can also take in word vector models built with other tools. For example, deeplearning4j mentions importing GloVe models, and Gensim lists support for FastText, VarEmbed, and WordRank. So once you have a good idea of what such models can do and how to use them, building combinations of these is probably not too hard.
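With Gensim, for example, loading pretrained vectors in the common word2vec format looks roughly like this (using the well-known Google News vectors file; any word2vec-format file should work the same way):

from gensim.models import KeyedVectors

# Load pretrained vectors in the word2vec binary format.
vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)
print(vectors.most_similar("car", topn=5))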