Comparing changes
base repository: rockydev/spark-csv
base: master
head repository: databricks/spark-csv
compare: master
- 8 commits
- 16 files changed
- 7 contributors
Commits on Jul 12, 2016
- Fix for #308 write using specified dateFormat
Added support for writing to a file with the dateFormat specified in the options, and added some basic tests for it. I'm continuing to use SimpleDateFormat since that is what is used for reading; I would prefer Joda's DateTimeFormat, but that can be a separate issue. It's too bad CSVFormat does not allow specification of a dateFormat as it does for the null value, since that would make this much easier. Author: barrybecker <barrybecker@sgi.com>. Closes databricks#310 from barrybecker4/master.
Commit: 6290b6c
Commits on Jul 26, 2016
- Fix NPE that can occur for null dates
I know databricks#310 was recently merged and closed, but I just made an additional change to fix a problem I noticed when null dates are in the data: a check for null needs to be done before formatting. Let me know if I need to create a separate issue or if you can just merge this. Author: barrybecker <barrybecker@sgi.com>, Barry Becker <BBE@esi-group.com>. Closes databricks#360 from barrybecker4/master.
Commit: 1fd65ad
Commits on Sep 5, 2016
- Changed whitesspaces to white space. Author: Mayank-Shete <mayank.shete@gmail.com>. Closes databricks#331 from Mayank-Shete/master.
Commit: 1aebd64
- Allow for maxCharsPerCol to be set
Currently, the maxCharactersPerColumn value is hardcoded to 100,000. I (sadly) have CSV fields with more than 100k characters, so I need to be able to configure this. This allows the value to be passed the same way as other parameters while keeping a backward-compatible default of 100k. Author: Addison Higham <ahigham@instructure.com>. Closes databricks#307 from addisonj/master.
Commit: e3f3c1b
- Changes for 1.5 and final release. Author: Hossein <hossein@databricks.com>. Closes databricks#379 from falaki/changes-for-v1.5.0.
Commit: 1ae6492
Commits on Oct 1, 2016
- Treat nullValue for string as well
This PR fixes `nullValue` handling for `StringType` (see https://github.com/databricks/spark-csv/issues/370); it seems better to treat all the types consistently. Author: hyukjinkwon <gurwls223@gmail.com>. Closes databricks#384 from HyukjinKwon/null-string.
Commit: a0b5a27
Commits on Jan 7, 2017
- Delete ";" to match style to others
I think it is better to delete ";" to match the style of the other Scala sources. Author: chie hayashida <chie8842@gmail.com>. Closes databricks#395 from hayashidac/master.
Commit: 4ee6063
Commits on Jan 9, 2017
- Fixed inferring of types under existence of malformed lines
The type inference mechanism takes malformed lines into account, even though the user may have chosen to drop them. In my opinion, this leads to faulty results, since the inferred types get distorted by unruly lines that may have more or fewer columns. Since those lines get dropped anyway during the creation of the DataFrame, they shouldn't be taken into account during type inference either. Author: Georgios Gkekas <gkekas@gmail.com>, digigek <gkekas@gmail.com>. Closes databricks#394 from digigek/inference-malformed.
Commit: 7997c3c
The file-level diff for this comparison could not be rendered on the page (it may be too large). To reproduce it locally, run:

    git diff master...master