The code-behind of my custom component is not being executed when bundling the app with WebPack (tns run ios --bundle --env.snapshot --env.aot). I followed the official documentation but I couldn't get it to work.
Can anyone help me, please?
1. Example project: https://github.com/felipebueno/my-app
2. Custom components documentation: https://docs.nativescript.org/ui/basics#xml-based-custom-component-with-a-code-file
Screenshots for reference:
With webpack (example project. [Update] The name of the component is different but it's being imported properly):
Without webpack:
The thing is that the default webpack configuration (from nativescript-webpack here) bundles only pages whose names end with -page or -root. All other custom-named pages and resources must be included explicitly in your webpack.config.js, in the CopyWebpackPlugin array.
So, as a solution, rename your custom component so that it ends with the -page suffix and do a clean rebuild. For example, in your case change the file name to profile-icon-page (and use the proper import).
Or add the custom resources to the CopyWebpackPlugin section of your webpack.config.js file, as shown below:
new CopyWebpackPlugin([
    { from: "shared/components/**" }, // HERE
    { from: "fonts/**" },
    { from: "**/*.jpg" },
    { from: "**/*.png" },
]),
What I'm trying to achieve
I'm trying to generate my REST API client for Android using OpenAPI Generator from the build.gradle script. That way, I wouldn't have to run the generator command line every time the specs change. Ideally, this would be generated when I build/assemble my app, and the sources would end up in the java (generated) folder, where generated sources are then accessible from the code (this is what happens with the BuildConfig.java file for example).
What I've tried so far
Following this link from their official GitHub, here's the build.gradle file I ended up with:
apply plugin: 'com.android.application'
apply plugin: 'org.openapi.generator'
...
openApiValidate {
    inputSpec = "$rootDir/app/src/main/openapi/my-api.yaml"
    recommend = true
}
openApiGenerate {
    generatorName = "java"
    inputSpec = "$rootDir/app/src/main/openapi/my-api.yaml"
    outputDir = "$buildDir/generated/openapi"
    groupId = "$project.group"
    id = "$project.name-openapi"
    version = "$project.version"
    apiPackage = "com.example.mypackage.api"
    invokerPackage = "com.example.mypackage.invoker"
    modelPackage = "com.example.mypackage.model"
    configOptions = [
        java8       : "true",
        dateLibrary : "java8",
        library     : "retrofit2"
    ]
}
...
First, I've never managed to get the API generated with the build/assemble task, even when I tried adding:
compileJava.dependsOn tasks.openApiGenerate
or
assemble.dependsOn tasks.openApiGenerate
The only way I could generate the sources was by manually triggering the openApiGenerate task:
Then, when I do generate my sources this way, they end up in the build folder but aren't accessible from my code, and aren't visible in the java (generated) folder:
I then have to manually copy/paste the generated source files to my project sources in order to use the API.
Even though I'm able to work around these issues by adding manual procedures, it would be way more maintainable if the whole process was simply automatic. I was able to achieve a similar result with another tool, Protobuf. Indeed, my gradle task gets triggered every time I build the app, and the sources end up in the java (generated) folder, so I don't have to do any additional work. The task is much simpler though, so I assume the main work that I'm not able to replicate with OpenAPI Generator is handled by the Protobuf plugin itself.
You have to specify the path to the generated sources as a custom source set for your Gradle module (app in this case), as described here: https://developer.android.com/studio/build/build-variants#configure-sourcesets. That way Gradle will treat the generated sources as part of your code.
Something like this:
android {
    ...
    sourceSets {
        main {
            java.srcDirs = ['build/generated/openapi/src/main/java']
        }
    }
    ...
}
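To also trigger the generation automatically on each build (the other part of the question), one option is to hook the task into the Android build graph. A minimal Groovy sketch, assuming the openApiGenerate task configured in the question:

afterEvaluate {
    // Sketch: make the Android build depend on OpenAPI code generation.
    // preBuild is created by the Android Gradle plugin, so wire it up after evaluation.
    tasks.named("preBuild") {
        dependsOn(tasks.named("openApiGenerate"))
    }
}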
I solved the issue you described like this; note, however, that I'm using the Gradle Kotlin DSL (gradle.kts).
See my build.gradle.kts
plugins {
    // Your other plugins
    id("org.openapi.generator") version "5.3.0"
}

openApiGenerate {
    generatorName.set("kotlin")
    inputSpec.set("$rootDir/app/src/main/openapi/my-api.yaml")
    outputDir.set("$buildDir/generated/api")
    // Your other specification
}

application {
    // Your other code
    sourceSets {
        main {
            java {
                // TODO: Set this path according to what was generated for you
                srcDir("$buildDir/generated/api/src/main/kotlin")
            }
        }
    }
}

tasks.compileKotlin {
    dependsOn(tasks.openApiGenerate)
}
You need to build the application at least once for the IDE to detect the generated library (at least this is the case for me in IntelliJ).
Your build should automatically generate the OpenAPI classes. To reference the generated classes in your Java project, add the generated class path to your source directories, as mentioned in the other answers:
https://developer.android.com/studio/build/build-variants#configure-sourcesets
As far as the task dependency goes, Android tasks are created after the configuration phase, so for Gradle to recognize the task, wrap the dependency inside an afterEvaluate block, like this:
afterEvaluate {
    tasks.compileDebugJavaWithJavac.dependsOn(tasks.openApiGenerate)
}
I had this issue, and this answer https://stackoverflow.com/a/55646891/14111809 led me to a more informative error:
error: incompatible types: Object cannot be converted to Annotation
#java.lang.Object()
Taking a look at the generated files that were causing this error, I noticed:
import com.squareup.moshi.Json;
After including Moshi in the app build.gradle, the build succeeded and the generated code was accessible:
implementation("com.squareup.moshi:moshi-kotlin:1.13.0")
TL;DR
How to add two or more kotlin native modules on an iOS project without getting duplicate symbols error?
The detailed question
Let's assume a multi-module KMP project as follows, where there is a native app for Android, a native app for iOS, and two common modules that hold shared Kotlin code.
.
├── android
│ └── app
├── common
│ ├── moduleA
│ └── moduleB
├── ios
│ └── app
Module A contains a data class HelloWorld and has no module dependencies:
package hello.world.modulea
data class HelloWorld(
val message: String
)
Module B contains an extension function for the HelloWorld class, so it depends on module A:
package hello.world.moduleb
import hello.world.modulea.HelloWorld
fun HelloWorld.egassem() = message.reversed()
The build.gradle configurations of the modules are:
Module A
apply plugin: "org.jetbrains.kotlin.multiplatform"
apply plugin: "org.jetbrains.kotlin.native.cocoapods"
…
kotlin {
targets {
jvm("android")
def iosClosure = {
binaries {
framework("moduleA")
}
}
if (System.getenv("SDK_NAME")?.startsWith("iphoneos")) {…}
}
cocoapods {…}
sourceSets {
commonMain.dependencies {
implementation "org.jetbrains.kotlin:kotlin-stdlib-common:1.3.72"
}
androidMain.dependencies {
implementation "org.jetbrains.kotlin:kotlin-stdlib:1.3.72"
}
iosMain.dependencies {
}
}
}
Module B
apply plugin: "org.jetbrains.kotlin.multiplatform"
apply plugin: "org.jetbrains.kotlin.native.cocoapods"
…
kotlin {
targets {
jvm("android")
def iosClosure = {
binaries {
framework("moduleB")
}
}
if (System.getenv("SDK_NAME")?.startsWith("iphoneos")) {…}
}
cocoapods {…}
sourceSets {
commonMain.dependencies {
implementation "org.jetbrains.kotlin:kotlin-stdlib-common:1.3.72"
implementation project(":common:moduleA")
}
androidMain.dependencies {
implementation "org.jetbrains.kotlin:kotlin-stdlib:1.3.72"
}
iosMain.dependencies {
}
}
}
It looks pretty straightforward, and it even works on Android if I configure the Android app's build.gradle dependencies as follows:
dependencies {
    implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk8:1.3.72"
    implementation project(":common:moduleA")
    implementation project(":common:moduleB")
}
However, this does not seem to be enough to organize multiple modules on iOS. Running ./gradlew podspec I get BUILD SUCCESSFUL as expected, with the following pods:
pod 'moduleA', :path => '…/HelloWorld/common/moduleA'
pod 'moduleB', :path => '…/HelloWorld/common/moduleB'
Even running pod install I get a success output ("Pod installation complete! There are 2 dependencies from the Podfile and 2 total pods installed."), which looks correct, since Xcode shows module A and module B in the Pods section.
However, if I try to build the iOS project I get the following error:
Ld …/Hello_World-…/Build/Products/Debug-iphonesimulator/Hello\ World.app/Hello\ World normal x86_64 (in target 'Hello World' from project 'Hello World')
cd …/HelloWorld/ios/app
…
duplicate symbol '_ktypew:kotlin.Any' in:
…/HelloWorld/common/moduleA/build/cocoapods/framework/moduleA.framework/moduleA(result.o)
…/HelloWorld/common/moduleB/build/cocoapods/framework/moduleB.framework/moduleB(result.o)
… a lot of duplicate symbol more …
duplicate symbol '_kfun:kotlin.throwOnFailure$stdlib#kotlin.Result<#STAR>.()' in:
…/HelloWorld/common/moduleA/build/cocoapods/framework/moduleA.framework/moduleA(result.o)
…/HelloWorld/common/moduleB/build/cocoapods/framework/moduleB.framework/moduleB(result.o)
ld: 9928 duplicate symbols for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
My knowledge of iOS is limited, so to my untrained eyes it looks like each module is adding its own version of things instead of using some resolution strategy to share them.
If I use only module A, the code works and runs as expected, so I know the code itself is correct; the problem is how to manage more than one module. Hence the question: how can I add both module A and module B on iOS and make things work?
P.S.
I reduced the code as much as I could, trying to keep only the parts that I guess are the source of the problem; however, the complete code is available here if you want to check anything missing from the snippets, or if you want to run it and try to solve the problem…
Multiple Kotlin frameworks can be tricky, but they should work as of 1.3.70, which I see you have.
The issue seems to be that both frameworks are static, which is currently a problem in 1.3.70, so it isn't working. (This should be fixed by 1.4.0.) It looks like the cocoapods plugin sets the frameworks to be static by default, which won't work. I'm unaware of a way to make cocoapods set them as dynamic, but I've tested building without cocoapods, using the isStatic variable in a Gradle task, and have gotten an iOS project to compile. Something like:
binaries {
    framework("moduleA") {
        isStatic = false
    }
}
For now you can work around the issue by using the code above and creating a task to build the frameworks (here's an example).
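As a rough illustration only (this is not the linked example; the packForXcode task name, the ios target name, and the moduleA framework prefix are all assumptions based on the question), such a task could look roughly like this:

task packForXcode(type: Sync) {
    // Hypothetical sketch: link the (now dynamic) framework and sync it to a folder
    // that the Xcode project can reference. CONFIGURATION is set by Xcode build phases.
    def mode = (System.getenv('CONFIGURATION') ?: 'DEBUG').toUpperCase()
    def framework = kotlin.targets.getByName("ios").binaries.getFramework("moduleA", mode)
    dependsOn framework.linkTask
    from { framework.outputDirectory }
    into "$buildDir/xcode-frameworks"
}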
Another thing worth noting is that on the iOS side, the HelloWorld classes will appear as two separate classes despite both coming from moduleA. It's another strange situation with multiple Kotlin frameworks, but I think the extension will still work in this case since you're returning a string.
I actually just wrote up a blog post about multiple Kotlin frameworks that may help with some other questions if you'd like to take a look. https://touchlab.co/multiple-kotlin-frameworks-in-application/
EDIT: Looks like cocoapodsext also has an isStatic variable, so set it to isStatic = false
tl;dr: You currently can't have more than one static Kotlin framework in the same iOS project. Set them to not be static using isStatic = false.
However, if I try to build the iOS project I get the following error:
This particular error is a known issue. Multiple debug static frameworks are incompatible with compiler caches.
So to workaround the issue you can either disable compiler caches by putting the following line into your gradle.properties:
kotlin.native.cacheKind=none
or make the frameworks dynamic by adding the following snippet to your Gradle build script:
kotlin {
    targets.withType<org.jetbrains.kotlin.gradle.plugin.mpp.KotlinNativeTarget> {
        binaries.withType<org.jetbrains.kotlin.gradle.plugin.mpp.Framework> {
            isStatic = false
        }
    }
}
See https://youtrack.jetbrains.com/issue/KT-42254 for more details.
I guess the current behaviour for multiple frameworks doesn't make much sense to the original topic starter; I'm just putting my answer here for anyone who might encounter the same issue.
My knowledge of iOS is limited, so to my untrained eyes it looks like each module is adding its own version of things instead of using some resolution strategy to share them.
This is exactly how it is supposed to work at the moment. But the "versions of things" in each of the frameworks are put into separate, independent namespaces, so there should be no linkage errors; the one you've encountered is a bug.
The following was done with Android Studio 3.4, Android Gradle Plugin 3.3.2 and Gradle 4.10.3.
In the build.gradle file, I have configured some unit test options like this:
android {
    testOptions {
        unitTests.all {
            systemProperty "debug", "true"
        }
    }
}
I do have a test function that tries to read this property:
package com.demo;

import org.junit.Test;

public class SysPropTestDemo {
    @Test
    public void dumpSysProps() {
        System.out.println("sysprop(debug)=" + System.getProperty("debug"));
    }
}
When run via the command line with gradlew test --tests com.demo.SysPropTestDemo, the property debug is correctly set to true. If I run the same test via Android Studio without setting any options, the value shown is null.
In order to get the same result from Android Studio, I explicitly have to enter some values in the "Run/Debug Configurations" panel, i.e. something like -Ddebug=true in the VM options.
Now, this is a trivial example, but what I really want to do is add a path to the java.library.path property in order to be able to load a JNI library compiled within the project. (I need to write some tests that make use of a modified SQLite lib, so not using JNI is not an option here.)
It does work when setting additional options, but I think this is very inconvenient, since I can't enter a variable-based value in the configuration options (or at least, I don't know how to). To sum it up: when setting or changing values, I have to go through a bunch of config screens, whereas I would really prefer to have everything in one place in a config file.
Shouldn't Android Studio somehow make use of the values specified in the build.gradle file? If not, the docs don't make it clear that the testOptions.unitTests.all settings can only be used via gradlew invocation.
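For concreteness, the kind of configuration being asked about might look roughly like the sketch below; the src/main/jniLibs/desktop folder is a hypothetical location for the compiled library.

android {
    testOptions {
        unitTests.all {
            // Hypothetical sketch: prepend a project-relative folder holding the compiled
            // JNI library to java.library.path for local unit tests.
            systemProperty "java.library.path",
                    "${projectDir}/src/main/jniLibs/desktop" +
                    File.pathSeparator +
                    System.getProperty("java.library.path")
        }
    }
}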
Skybow,
I feel you have two questions:
1. How to load a JNI lib for androidTest (not for 'test', i.e. non-instrumented unit tests)
- Copy your JNI library into the corresponding folder (JNI libraries: app/src/androidTestFLAVORNAMEDebug/jniLibs)
- Load your JNI library:
static {
    try {
        System.loadLibrary("xyzjni");
    } catch (UnsatisfiedLinkError e) {
        // loadLibrary failures are reported as UnsatisfiedLinkError, not Exception
        Logger.error("Error on loading the jni library : " + e.getMessage());
    }
}
2. How to make Android Studio use your config variables defined for unitTests
- It would have been great if there were some text file that held all the configs, or if it were part of build.gradle.
- I don't have any details on this.
I found that the famous open-source example "u2020" has many folders under src, and I can see them in the project view.
Source: https://github.com/JakeWharton/u2020
Screenshot:
To achieve this in my project, I tried to make folders named "internal", "internalDebug", etc., but Android Studio does not show them automatically.
I also tried to open and find the keyword "internalDebug" in u2020's build.gradle, but there is no such keyword.
How can I achieve this? Any help will be appreciated.
I finally found the answer. It was "productFlavors".
It is a kind of build option that lets you add new source files to the project. (The same file name in two source sets is not allowed; it conflicts.)
Details: https://developer.android.com/studio/build/build-variants.html
Source: https://github.com/JakeWharton/u2020/blob/master/build.gradle
productFlavors {
    internal {
        applicationId 'com.jakewharton.u2020.internal'
    }
    production {
        applicationId 'com.jakewharton.u2020'
    }
}
Each flavor can have additional folders with a "Release" or "Debug" suffix.
For example, if you define a flavor "foo", the real folder structure can be:
src
- foo
- fooDebug
- fooRelease
- main <= Your original source here
- test
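These flavor folders are picked up by the Android Gradle plugin automatically once the flavor is declared. If non-default locations are ever needed, the source sets can also be declared explicitly; a small optional sketch (the paths shown are just the defaults again):

android {
    sourceSets {
        // Optional: declare the flavor source folders explicitly. With the default
        // layout shown above this is not needed; the plugin finds them on its own.
        foo {
            java.srcDirs = ['src/foo/java']
        }
        fooDebug {
            java.srcDirs = ['src/fooDebug/java']
        }
    }
}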
As per the introduction of Custom Class Loading in Dalvik by Fred Chung on the Android Developers Blog:
The Dalvik VM provides facilities for developers to perform custom
class loading. Instead of loading Dalvik executable (“dex”) files from
the default location, an application can load them from alternative
locations such as internal storage or over the network.
However, not many developers need to do custom class loading. But those who do, and who follow the instructions in that blog post, might have some problems mimicking the same behavior with Gradle, the new build system for Android introduced at Google I/O 2013.
How exactly can one adapt the new build system to perform the same intermediary steps as the old (Ant-based) build system?
My team and I recently reached 64K method references in our app, which is the maximum number supported in a dex file. To get around this limitation, we need to partition part of the program into multiple secondary dex files and load them at runtime.
We followed the blog post mentioned in the question for the old, Ant based, build system and everything was working just fine. But we recently felt the need to move to the new build system, based on Gradle.
This answer does not intend to replace the full blog post with a complete example. Instead, it will simply explain how to use Gradle to tweak the build process and achieve the same thing. Please note that this is probably just one way of doing it and how we are currently doing it in our team. It doesn't necessarily mean it's the only way.
Our project is structured a little different and this example works as an individual Java project that will compile all the source code into .class files, assemble them into a single .dex file and to finish, package that single .dex file into a .jar file.
Let's start...
In the root build.gradle we have the following piece of code to define some defaults:
ext.androidSdkDir = System.env.ANDROID_HOME
if(androidSdkDir == null) {
    Properties localProps = new Properties()
    localProps.load(new FileInputStream(file('local.properties')))
    ext.androidSdkDir = localProps['sdk.dir']
}
ext.buildToolsVersion = '18.0.1'
ext.compileSdkVersion = 18
We need the code above because although the example is an individual Java project, we still need to use components from the Android SDK. And we will also be needing some of the other properties later on... So, on the build.gradle of the main project, we have this dependency:
dependencies {
    compile files("${androidSdkDir}/platforms/android-${compileSdkVersion}/android.jar")
}
We are also simplifying the source sets of this project, which might not be necessary for your project:
sourceSets {
    main {
        java.srcDirs = ['src']
    }
}
Next, we change the default configuration of the built-in jar task to simply include the classes.dex file instead of all .class files:
configure(jar) {
    include 'classes.dex'
}
Now we need to have new task that will actually assemble all .class files into a single .dex file. In our case, we also need to include the Protobuf library JAR into the .dex file. So I'm including that in the example here:
task dexClasses << {
    String protobufJarPath = ''
    String cmdExt = Os.isFamily(Os.FAMILY_WINDOWS) ? '.bat' : ''
    configurations.compile.files.find {
        if(it.name.startsWith('protobuf-java')) {
            protobufJarPath = it.path
        }
    }
    exec {
        commandLine "${androidSdkDir}/build-tools/${buildToolsVersion}/dx${cmdExt}", '--dex',
                "--output=${buildDir}/classes/main/classes.dex",
                "${buildDir}/classes/main", "${protobufJarPath}"
    }
}
Also, make sure you have the following import somewhere (usually at the top, of course) on your build.gradle file:
import org.apache.tools.ant.taskdefs.condition.Os
Now we must make the jar task depend on our dexClasses task, to make sure that our task is executed before the final .jar file is assembled. We do that with a simple line of code:
jar.dependsOn(dexClasses)
And we're done... Simply invoke Gradle with the usual assemble task, and your final .jar file, ${buildDir}/libs/${archivesBaseName}.jar, will contain a single classes.dex file (besides the MANIFEST.MF file). Just copy that into your app assets folder (you can always automate that with Gradle, as we've done, but that is out of the scope of this question) and follow the rest of the blog post.
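The copy step mentioned above could be automated with something along these lines (a sketch only; the task name and the ../app/src/main/assets path are assumptions about the consuming project's layout):

task copyJarToAppAssets(type: Copy, dependsOn: jar) {
    // Hypothetical helper: copy the dexed jar from this module's libs folder into the
    // app module's assets folder. Adjust the relative path to match your project.
    from jar.archivePath
    into file('../app/src/main/assets')
}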
If you have any questions, just shout in the comments. I'll try to help to the best of my abilities.
The Android Studio Gradle plugin now provides native multidex support, which effectively solves the Android 65k method limit without having to manually load classes from a jar file, and thus makes Fred Chung's blog obsolete for that purpose. However, loading custom classes from a jar file at runtime in Android is still useful for the purpose of extensibility (e.g. making a plugin framework for your app), so I'll address that usage scenario below:
I have created a port of the original example app on Fred Chung's blog to Android Studio on my github page over here using the Android library plugin rather than the Java plugin. Instead of trying to modify the existing dex process to split up into two modules like in the blog, I've put the code which we want to go into the jar file into its own module, and added a custom task assembleExternalJar which dexes the necessary class files after the main assemble task has finished.
Here is relevant part of the build.gradle file for the library. If your library module has any dependencies which are not in the main project then you will probably need to modify this script to add them.
apply plugin: 'com.android.library'
// ... see github project for the full build.gradle file
// Define some tasks which are used in the build process
task copyClasses(type: Copy) { // Copy the assembled *.class files for only the current namespace into a new directory
    // get directory for current namespace (PLUGIN_NAMESPACE = 'com.example.toastlib')
    def namespacePath = PLUGIN_NAMESPACE.replaceAll("\\.","/")
    // set source and destination directories
    from "build/intermediates/classes/release/${namespacePath}/"
    into "build/intermediates/dex/${namespacePath}/"
    // exclude classes which don't have a corresponding .java entry in the source directory
    def remExt = { name -> name.lastIndexOf('.').with {it != -1 ? name[0..<it] : name} }
    eachFile { details ->
        def thisFile = new File("${projectDir}/src/main/java/${namespacePath}/", remExt(details.name)+".java")
        if (!(thisFile.exists())) {
            details.exclude()
        }
    }
}
task assembleExternalJar << {
    // Get the location of the Android SDK
    ext.androidSdkDir = System.env.ANDROID_HOME
    if(androidSdkDir == null) {
        Properties localProps = new Properties()
        localProps.load(new FileInputStream(file('local.properties')))
        ext.androidSdkDir = localProps['sdk.dir']
    }
    // Make sure no existing jar file exists as this will cause dx to fail
    new File("${buildDir}/intermediates/dex/${PLUGIN_NAMESPACE}.jar").delete();
    // Use command line dx utility to convert *.class files into classes.dex inside jar archive
    String cmdExt = Os.isFamily(Os.FAMILY_WINDOWS) ? '.bat' : ''
    exec {
        commandLine "${androidSdkDir}/build-tools/${BUILD_TOOLS_VERSION}/dx${cmdExt}", '--dex',
                "--output=${buildDir}/intermediates/dex/${PLUGIN_NAMESPACE}.jar",
                "${buildDir}/intermediates/dex/"
    }
    copyJarToOutputs.execute()
}
task copyJarToOutputs(type: Copy) {
    // Copy the built jar archive to the outputs folder
    from 'build/intermediates/dex/'
    into 'build/outputs/'
    include '*.jar'
}
// Set the dependencies of the build tasks so that assembleExternalJar does a complete build
copyClasses.dependsOn(assemble)
assembleExternalJar.dependsOn(copyClasses)
For more detailed information see the full source code for the sample app on my github.
See my answer over here. The key points are:
Use the additionalParameters property on the dynamically created dex<Variant> tasks (e.g. dexDebug) to pass --multi-dex to dx and create multiple dex files (see the sketch after this list).
Use the multidex class loader to use the multiple dex files.
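A rough sketch of the first point, assuming an Android Gradle plugin of that era (task and property names changed across plugin versions, so treat this as illustrative only):

afterEvaluate {
    // Illustrative only: append --multi-dex to every dynamically created dex<Variant>
    // task (dexDebug, dexRelease, ...). Property and task names differ between old
    // Android Gradle plugin versions.
    tasks.matching { it.name.startsWith('dex') }.each { dexTask ->
        if (dexTask.additionalParameters == null) {
            dexTask.additionalParameters = []
        }
        dexTask.additionalParameters += '--multi-dex'
    }
}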